AI Coding Assistants — 2026-03-22
GitHub Copilot launched a new dedicated Student plan this week, while Fortune published a major profile on Cursor's uncertain future as Anthropic and OpenAI close in. A fresh SWE-bench leaderboard update (March 2026) puts Claude Opus 4.6 at the top with 80.8% on SWE-bench Verified, and the biggest developer talking point remains Cursor's $30 billion valuation in the crosshairs of the very AI labs whose models power it.

Sources (GitHub Changelog, github.blog):
- GitHub Copilot coding agent for Jira is now in public preview
- Updates to GitHub Copilot for students
- GitHub Copilot CLI is now generally available
- GitHub Copilot in Visual Studio — November update
Official Releases & Updates
- GitHub Copilot (Student Plan): Starting March 13, students with GitHub Education benefits have been moved onto a new dedicated GitHub Copilot Student plan. The transition includes an updated model lineup specifically curated for students. This matters for developers who are learning — it signals GitHub's ongoing investment in the student pipeline and may expose a new generation of engineers to specific Copilot model capabilities earlier in their careers.
- GitHub Copilot (Jira Integration — Public Preview): GitHub's coding agent can now be assigned Jira issues directly, autonomously generating draft pull requests in your GitHub repository. This Jira integration, which entered public preview on March 5, bridges two of the most common developer workflows — ticket management and code submission — into a single AI-driven loop. It's a significant step toward fully autonomous issue-to-PR pipelines.
- Windsurf (Cognition AI acquisition context): Windsurf — Codeium's AI IDE — is now owned by Cognition AI (makers of Devin) following a ~$250M acquisition in December 2025. The tool ranked #1 in LogRocket's AI Dev Tool Power Rankings in February 2026, positioning it as the budget-friendly IDE alternative at $15/month versus Claude Code. With new ownership, developers are watching whether Windsurf's product direction shifts significantly under Cognition AI's stewardship.
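The Jira integration above closes a loop from ticket to draft pull request. As a conceptual sketch only, the loop's shape looks roughly like this — every function here (`fetch_issue`, `generate_patch`, `open_draft_pr`) is a hypothetical stand-in for illustration, not the real Jira or Copilot API:

```python
# Conceptual issue-to-PR loop. All three helpers are hypothetical
# stand-ins, NOT actual Jira REST or GitHub Copilot agent calls.

def fetch_issue(issue_key: str) -> dict:
    # Stand-in for reading a Jira issue via its REST API.
    return {"key": issue_key, "summary": "Fix off-by-one in pagination"}

def generate_patch(issue: dict) -> str:
    # Stand-in for the coding agent turning an issue into a diff.
    return f"patch for {issue['key']}: {issue['summary']}"

def open_draft_pr(patch: str) -> str:
    # Stand-in for pushing a branch and opening a draft pull request.
    return f"draft PR opened with {patch!r}"

def issue_to_pr(issue_key: str) -> str:
    # The whole pipeline the integration automates: ticket in, draft PR out.
    issue = fetch_issue(issue_key)
    patch = generate_patch(issue)
    return open_draft_pr(patch)

print(issue_to_pr("PROJ-123"))
```

The point of the sketch is the shape of the pipeline: a human writes and scopes the ticket, and review happens on the draft PR, with the middle steps automated.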

Developer Community Pulse
- Cursor's existential dilemma goes mainstream: Fortune published a major feature on March 21 profiling Cursor CEO Michael Truell and framing the $30 billion startup's future as genuinely uncertain. The piece argues that while Cursor built one of the fastest-growing companies in history, competition from Anthropic (Claude Code) and OpenAI threatens to undercut it — particularly since Cursor's core value runs on the same underlying models its competitors offer. This has sparked widespread developer debate: is Cursor's scaffolding and UX moat strong enough to survive model commoditization?
- "The scaffolding matters more than the model": A testing report from Morph LLM noted that the same underlying model can score 17 problems apart depending on which agent framework surrounds it — and that among 15 AI coding agents tested, only 3 "changed how we ship." Developers in the community have latched onto this framing, arguing it explains why tool choice matters even when model access is nearly identical across platforms.
- Claude Code dominates among senior engineers: The Pragmatic Engineer newsletter published survey data from 900+ respondents showing Claude Code dominates tool usage among engineers, with staff+ engineers being the biggest users of AI agents. The data also showed that engineering leaders are more positive about AI productivity than individual contributors — a gap that continues to generate spirited discussion in developer communities.
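The "scaffolding matters more than the model" point can be made concrete with a toy experiment: the same stub model, wrapped in two different scaffolds, produces noticeably different scores. The numbers are synthetic and the scaffolds are caricatures (single-shot vs. retry-until-pass); nothing here reproduces Morph LLM's actual setup:

```python
import random

# Illustrative only: one stub "model" wrapped by two different agent
# scaffolds, showing how scaffolding alone can move a benchmark score.
# The probabilities and counts are synthetic, not real SWE-bench data.

def stub_model(problem: int, rng: random.Random) -> bool:
    # Pretend the model solves any single attempt with 50% probability.
    return rng.random() < 0.5

def single_shot(problems, rng) -> int:
    # Scaffold 1: one attempt per problem, no feedback loop.
    return sum(stub_model(p, rng) for p in problems)

def retry_with_tests(problems, rng, attempts: int = 3) -> int:
    # Scaffold 2: retry up to 3 times, keeping any success
    # (as if the scaffold ran the repo's test suite between tries).
    return sum(any(stub_model(p, rng) for _ in range(attempts)) for p in problems)

problems = range(100)
print("single-shot solved:", single_shot(problems, random.Random(0)))
print("retry scaffold solved:", retry_with_tests(problems, random.Random(0)))
```

With an identical "model," the retry scaffold's expected solve rate jumps from 50% to about 87% — a crude illustration of why agent frameworks around the same model can land many problems apart.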
Benchmarks & Comparisons
The BenchLM.ai coding leaderboard (updated through March 2026) now ranks 135 AI models across SWE-bench Pro, LiveCodeBench, HumanEval, SWE-bench Verified, and FLTEval.
Key figures from benchmark aggregators updated this week:
| Model | Benchmark | Score |
|---|---|---|
| Claude Opus 4.6 | SWE-bench Verified | 80.8% |
| GPT-5.4 | Terminal-Bench | 75.1% |

Per Onyx AI's benchmark roundup, Claude Opus 4.6 leads SWE-bench Verified while GPT-5.4 leads the Terminal-Bench evaluation.
Notably, a $0 (free-tier) tool scored 80.8% on SWE-bench Verified in Morph LLM's Cursor alternatives shootout — matching the top cloud model score — while a $10/month option was highlighted for running three agents simultaneously.
A Reddit thread on r/LocalLLaMA this week reflected the community consensus that SWE-bench Verified and SWE-bench Pro carry the most signal for evaluating real coding ability, as older benchmarks like HumanEval increasingly saturate.
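For readers new to these leaderboards: a SWE-bench-style score is simply the share of benchmark instances whose hidden tests pass after the model's patch is applied. A minimal sketch — the 404/500 split below is chosen to reproduce an 80.8% figure for illustration, not taken from any real run (500 is SWE-bench Verified's instance count):

```python
# A SWE-bench-style leaderboard score is a resolved-instance rate:
# resolved / total, reported as a percentage. The example split is
# illustrative, not a reproduction of any published result.

def resolved_rate(results: list[bool]) -> float:
    """Percentage of benchmark instances whose tests passed post-patch."""
    return 100.0 * sum(results) / len(results)

# 500 instances, 404 resolved -> 80.8%
results = [True] * 404 + [False] * 96
print(f"{resolved_rate(results):.1f}%")  # -> 80.8%
```

This is also why "saturation" matters: as top models resolve most instances, the remaining headroom shrinks and the metric stops separating them.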

What to Watch Next
- Cursor's competitive response: With the Fortune profile landing and Anthropic/OpenAI applying direct pressure, watch for Cursor to announce new differentiating features or pricing moves in the coming weeks. CEO Michael Truell's next public statements will be closely scrutinized by the developer community.
- Windsurf under Cognition AI: The December 2025 acquisition by Cognition AI has been largely quiet since closing. With Windsurf now holding the #1 spot in LogRocket's power rankings, developer expectations are building for a product roadmap reveal that integrates Cognition/Devin's autonomous agent capabilities into Windsurf's IDE. Any announcement here would reshape the sub-$20/month IDE tier.
- SWE-bench Pro as the new standard: The community is coalescing around SWE-bench Pro as the replacement benchmark for serious model evaluation. Watch for major labs to begin citing Pro scores in announcements as the older Verified benchmark saturates toward 80%+.
Reader Action Items
- Students: If you have GitHub Education benefits, check that your account has been migrated to the new GitHub Copilot Student plan — log in to github.com/education and verify your model access, as the available lineup has changed as of March 13.
- Teams using Jira + GitHub: The new Copilot coding agent for Jira is now in public preview — try assigning a well-scoped Jira ticket to the agent this week to evaluate draft PR quality on a real issue before committing to the workflow.
This content was collected, curated, and summarized entirely by AI — including how and what to gather. It may contain inaccuracies. Crew does not guarantee the accuracy of any information presented here. Always verify facts on your own before acting on them. Crew assumes no legal liability for any consequences arising from reliance on this content.