The Pentagon blacklists Anthropic over safety rules 🪖 - Edition 007
Edition Summary
The Pentagon signs AI deals with 8 tech giants and blacklists Anthropic for insisting on safety guardrails. Mayo Clinic's AI catches pancreatic cancer 3 years early. Kimi K2.6 ships with 300-agent swarms. And a simple prompting trick that gets you better outputs on the first try.
💬 Daily Quote
“Clarity is not something you find. It is something you create by removing the things that do not belong.”
📌 TL;DR
Today: The Pentagon signs AI deals with 8 tech giants and blacklists Anthropic for insisting on safety guardrails. Mayo Clinic’s AI detects cancer 3 years early from routine scans. Kimi K2.6 ships a 1-trillion-parameter open-source model with 300-agent swarms. And a prompting fix that makes your first draft noticeably better.
🏆 LLM Leaderboard #AI #Models
The most popular models this week.
🧠 Top by Intelligence Score
🔥 Top by Actual Usage
Kimi K2.6 re-enters the intelligence chart at 55 after its full public release last week. With 300 sub-agent swarm support and a 1-trillion-parameter architecture, it is now the most capable open-source model for multi-step workflows. The usage gap stays wide. MiMo-V2-Pro still leads daily traffic while GPT-5.5 sits untouched at the top of benchmarks. Chinese-built models continue to account for over 45% of total OpenRouter traffic.
Source: OpenRouter Rankings + Artificial Analysis Benchmarks
📰 Oh, So AI did this #AI #News
1. Pentagon signs AI deals with 8 tech giants. Anthropic is blacklisted for asking for safety rules.
The Department of Defense signed agreements with OpenAI, Google, Microsoft, Nvidia, Amazon Web Services, Oracle, SpaceX, and Reflection to deploy AI inside classified networks. Anthropic was cut out entirely. The reason: Anthropic refused to let the military use Claude for “all lawful purposes” without safety guardrails around autonomous weapons and mass surveillance. The Trump administration labelled Anthropic a “supply chain risk,” a tag previously reserved for foreign adversaries. Anthropic sued and a federal judge blocked the blacklist last month. Talks have reopened. Per CNN.
Read more at CNN #AI #News
2. Musk v. Altman trial: Expert witness says the AGI arms race is the real danger.
Stuart Russell, UC Berkeley computer science professor, testified on behalf of Musk in Oakland. He told the jury that the biggest threat is not any single company but the winner-take-all race to reach AGI first. OpenAI’s lawyers limited his testimony on existential risks. Separately, Musk admitted on the stand that xAI distils OpenAI’s models. The trial continues this week. Per TechCrunch and MIT Technology Review.
Read more at TechCrunch #AI #News
3. Mayo Clinic’s AI catches pancreatic cancer 3 years before diagnosis. From routine scans.
REDMOD (Radiomics-based Early Detection Model) identified 73% of prediagnostic pancreatic cancers at a median of 16 months before doctors spotted them. On scans taken more than 2 years before diagnosis, REDMOD caught nearly 3x more early cancers than specialists reviewing the same images. It works on routine CTs, no special scan needed. Published in the journal Gut. Per Mayo Clinic News Network.
Read more at Mayo Clinic #AI #News
4. Anthropic’s “Code with Claude” developer conference is tomorrow. A new model may drop.
Anthropic hosts its first developer conference in San Francisco on May 6. Internal red-teaming for “Jupiter V1” was spotted before the event, matching the pattern Anthropic used before launching the Claude 4 family last year. References to Sonnet 4.8 have also leaked in Claude Code’s source. London (May 19) and Tokyo (June 10) events follow. Per Anthropic and TestingCatalog.
Read more at Anthropic #AI #News
Enjoying this breakdown?
Join 4,200+ others getting these simple translations every Tuesday and Friday.
🪄 Oh, So AI can do that?! #AI #Tools
1. Kimi K2.6 ships in full. 1 trillion parameters. 300 sub-agents. Open-source.
Moonshot AI’s Kimi K2.6 is now fully available via API, web, and app. It is a 1-trillion-parameter sparse MoE model with 32 billion active parameters per token. The headline feature: agent swarms. Give it a complex project and it decomposes it into up to 300 parallel sub-agents running 4,000 coordinated steps. Also generates polished frontend interfaces and slide decks from simple prompts. Per Kimi Blog.
Read more at Kimi #AI #Tools
2. Amazon launches AI-led job interviews. No human interviewer required.
Amazon Connect Talent (Preview) is an agentic AI hiring tool that conducts full interviews, runs science-backed assessments, and scores candidates consistently. Built for high-volume hiring where you need quality at scale without burning recruiter hours. Currently in preview for AWS customers. Per AWS Weekly Roundup.
Read more at AWS #AI #Tools
3. Swoogo ships a native MCP server. Your event data now talks to any AI tool.
Swoogo launched a Model Context Protocol server that connects live event data (registrations, sessions, and attendee profiles) to any MCP-compatible AI tool. Ask questions about your event in natural language instead of digging through dashboards. Per Skift Meetings.
Read more at Skift Meetings #AI #Tools
⚡ Oh, So I can do this #SEO #WebDev
1. Google Preferred Sources goes global. Users can now choose which sites they see more.
As of April 30, Google’s Preferred Sources feature works in all supported languages worldwide. Users mark publishers they want to see more often in Top Stories. Over 200,000 unique sites have been selected so far. Google says readers are 2x more likely to click through after marking a site as preferred. Per 9to5Google and Google Blog.
2. Google now rewards original insight over rewrites. Generic summaries lose ground.
Updated guidance from Google’s 2026 documentation: a page that covers a trending topic by rewriting what already exists is less likely to earn durable visibility. What wins: original data, expert analysis, unique reporting, and fresh perspective. If your content strategy is “summarise the news,” expect declining returns. Per Search Engine Land.
Read more at Search Engine Land #SEO
3. Anthropic’s “Code with Claude” could reshape how developers build for the web.
Tomorrow’s developer conference in SF focuses on agentic AI in the software development lifecycle. Live demos, workshops, and potential new model drops. If you build web products with Claude Code or the API, this event sets the agenda for the next quarter of Anthropic’s developer tools. Per Anthropic.
Read more at Anthropic #WebDev
🧠 Oh, So that’s how you do it #AI #Tips
Front-load your constraints. Do not save them for the end.
LLMs process text left to right. When you put your constraints (tone, word count, audience, format) at the start of the prompt, the model applies them to everything it writes. When you put them at the end, the model has already committed to a direction and tries to retrofit them.
Before: “Write me a blog post about remote work. Keep it under 300 words, casual tone, for freelancers.”
After: “You are writing for freelancers. Casual tone. Under 300 words. Topic: why remote work is not the same as flexible work.”
The second version nails it on the first try. The first version usually needs a rewrite.
Source: Lakera Prompt Engineering Guide #AI #Tips
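If you build prompts in code, the same rule applies: assemble constraints first, topic last. Here is a minimal sketch (the function name and parameters are illustrative, not from any library):

```python
def build_prompt(topic: str, audience: str, tone: str, max_words: int) -> str:
    """Assemble a prompt with constraints first and the topic last.

    The model reads the constraints before it starts generating,
    so they shape the whole output instead of being retrofitted.
    """
    return (
        f"You are writing for {audience}. "
        f"{tone.capitalize()} tone. "
        f"Under {max_words} words. "
        f"Topic: {topic}"
    )

prompt = build_prompt(
    topic="why remote work is not the same as flexible work",
    audience="freelancers",
    tone="casual",
    max_words=300,
)
```

The assembled string mirrors the “After” prompt above: audience, tone, and length all land before the model sees the topic.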
Use K2.6’s agent swarm logic even without K2.6. Break one big prompt into parallel sub-tasks.
Kimi K2.6’s killer feature is decomposing projects into 300 parallel agents. You can steal this pattern manually with any model. Instead of asking one prompt to “create a full marketing plan,” break it into 5 separate prompts: audience research, messaging angles, content calendar, channel strategy, KPIs. Run them in parallel. Combine the outputs into one document.
This works because each sub-task gets the model’s full attention instead of splitting focus across a massive, vague request.
Source: Kimi K2.6 Blog #AI #Tips
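A minimal sketch of this manual swarm pattern using Python’s standard library. The `call_model` function is a placeholder for whichever LLM API you use, and the sub-task prompts are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

# Five focused sub-prompts instead of one vague mega-prompt.
SUB_TASKS = {
    "Audience research": "Profile the target audience for a B2B SaaS launch.",
    "Messaging angles": "Draft 5 messaging angles for the launch.",
    "Content calendar": "Outline a 4-week content calendar for the launch.",
    "Channel strategy": "Recommend 3 channels for the launch and explain why.",
    "KPIs": "List the 5 KPIs that matter most for the launch.",
}

def call_model(prompt: str) -> str:
    # Placeholder: swap in your actual LLM API call here.
    return f"[model output for: {prompt}]"

def run_swarm(tasks: dict) -> str:
    # Run each sub-task as its own prompt, in parallel.
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        results = pool.map(call_model, tasks.values())
    # Combine the outputs into one document, section by section.
    return "\n\n".join(
        f"## {name}\n{output}" for name, output in zip(tasks, results)
    )

plan = run_swarm(SUB_TASKS)
```

Each call gets the model’s full attention on one narrow question, and the final join step does the “combine the outputs” merge by hand.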
📝 Latest from the Blog
The Anatomy of an AI-Conducted Workflow (Beyond Prompting)
Most people use AI as a chatbot. Here is what an actual AI-conducted workflow looks like, with a real case study from Square Root SEO’s rebrand. Prompting is a skill, but AI orchestration is a system design practice. If you are rewriting outputs at the review stage, the problem is in your earlier layers.
🎣 Top Hook Ideas
Hook 1: The Pentagon just signed AI deals with 8 tech companies. The one company that asked for safety rules? Blacklisted. That tells you everything about where this is heading. #AI
Hook 2: An AI model just detected pancreatic cancer 3 years before doctors could see it. On routine scans people already get. This is not future tech. It is published science. #AI
Hook 3: Elon Musk admitted in court that xAI distils OpenAI’s models. The man suing OpenAI for being closed-source is using their outputs to train his own AI. #AI
Hook 4: Kimi K2.6 just shipped with 300 AI agents working in parallel on one project. One prompt. 4,000 coordinated steps. Your solo workflow just got a 300-person team. #Tools
Hook 5: Most people put their constraints at the end of an AI prompt. Move them to the top. The model reads left to right. Your output quality changes immediately. #Tips
Did 'Oh, So AI' help you understand something new today?
Forward this to a friend who is overwhelmed by AI.
Curated with ❤️ by Harshal
Creative Director orchestrating AI workflows for founders' teams. Writing about productivity, design, and AI systems.
You made it to the end! 🎉
Subscribers got this days ago. Drop your email or join our WhatsApp community to get the next one first.