GitHub Trending Weekly 2026-04-08: Skills Ecosystem Explosion, Cloudflare Takes on WordPress, Google Goes All-In on Edge AI
Data period: 2026-04-01 to 2026-04-08 (rolling 7 days). Sources: GitHub Trending weekly + monthly, GitHub Search API, HN Algolia.
TL;DR: The biggest surprise this week is Caveman — a Claude Code Skill that makes the AI "talk like a caveman," hitting 883 HN points and cutting output token usage by 65% on average (up to 87%) in official benchmarks. It single-handedly elevated the Skills ecosystem to new heights. The weekly star champion is OpenScreen (+15,921 stars), a free open-source Screen Studio alternative that kept climbing after a 432-point HN discussion. Sustained momentum signal: NousResearch/hermes-agent holds a Top 5 spot for the second consecutive week, confirming the self-evolving AI agent space is heating up fast.
Fastest Growing — Weekly Star Gains Top 10
Source: github.com/trending?since=weekly. 🔁 = also appears in monthly trending (sustained momentum signal)
| # | Project | +Stars/week | Total Stars | Language | Created |
|---|---|---|---|---|---|
| #1 🔁 | siddharthvaddem/openscreen | +15,921 | 25,760 | TypeScript | 2025-10-10 |
| #2 | Yeachan-Heo/oh-my-codex | +14,101 | 18,807 | TypeScript | 2026-02-02 |
| #3 | luongnv89/claude-howto | +10,745 | 23,009 | Python | 2025-11-07 |
| #4 🔁 | NousResearch/hermes-agent | +10,487 | 35,820 | Python | 2025-07-22 |
| #5 | Yeachan-Heo/oh-my-claudecode | +7,543 | 26,134 | TypeScript | 2026-01-09 |
| #6 | onyx-dot-app/onyx | +5,449 | 25,969 | Python | 2023-04-27 |
| #7 | sherlock-project/sherlock | +5,167 | 80,482 | Python | 2018-12-24 |
| #8 | google-research/timesfm | +4,137 | 15,571 | Python | 2024-04-29 |
| #9 | google-ai-edge/gallery | +2,934 | 19,272 | Kotlin | 2025-03-31 |
| #10 | google-ai-edge/LiteRT-LM | +1,336 | 2,842 | C++ | 2025-04-14 |
Top New Repos — Born This Week Top 10
Source: GitHub Search API (created:2026-04-01..2026-04-08, sorted by total stars)
| # | Project | Total Stars | Language | Created |
|---|---|---|---|---|
| #1 | milla-jovovich/mempalace | 23,986 | Python | 2026-04-05 |
| #2 | santifer/career-ops | 22,158 | JavaScript | 2026-04-04 |
| #3 | Gitlawb/openclaude | 19,425 | TypeScript | 2026-04-01 |
| #4 | safishamsi/graphify | 10,572 | Python | 2026-04-03 |
| #5 | emdash-cms/emdash | 8,380 | TypeScript | 2026-04-01 |
| #6 | HKUDS/OpenHarness | 7,600 | Python | 2026-04-01 |
| #7 | JuliusBrussee/caveman | 6,954 | Python | 2026-04-04 |
| #8 | ultraworkers/claw-code-parity | 6,618 | Rust | 2026-04-02 |
| #9 | kevinrgu/autoagent | 3,864 | Python | 2026-04-02 |
| #10 | 0xGF/boneyard | 3,707 | TypeScript | 2026-04-01 |
Spotlight — Fastest Growing Top 10
#1 — siddharthvaddem/openscreen | Free Open-Source Screen Studio Alternative
Create stunning demos for free. Open-source, no subscriptions, no watermarks, and free for commercial use. An alternative to Screen Studio.
This week +15,921 stars | Total 25,760 | TypeScript | MIT | Website
OpenScreen is this year's most compelling "direct replacement" story. Screen Studio charges $89/year and up; OpenScreen packages the same tier of features — window or full-screen recording, auto/manual zoom, custom backgrounds, motion blur — into an MIT-licensed Electron desktop app. Zero cost, no watermarks, commercial use allowed.
The surge traces back to an April 1 HN thread: 432 points, 73 comments. The core debate: can a free open-source tool truly replace a paid app deeply optimized for macOS? Most commenters concluded that OpenScreen's "good enough" bar is significantly higher than previous open-source options (OBS, ShareX) — more than sufficient for typical developer demo recordings. With 1,718 forks and frequent updates (last push: 2026-04-08), community engagement runs deep.
What this means for you: if you're paying $89/year mainly to record dev demos, OpenScreen deserves a try first.
#2 — Yeachan-Heo/oh-my-codex | Turning OpenAI Codex into a Multi-Agent Workstation
OmX - Oh My codeX: Your codex is not alone. Add hooks, agent teams, HUDs, and so much more.
This week +14,101 stars | Total 18,807 | TypeScript | 2026-02-02
oh-my-codex (OmX) is the OpenAI Codex CLI extension layer maintained by the same author as oh-my-claudecode, Yeachan-Heo. Both share a core philosophy: "One tool isn't enough — multi-agent coordination is the real productivity unlock." OmX uses tmux to run multiple AI CLI workers in parallel within a single terminal window (Codex + Claude side by side), with custom hooks and a visual HUD that turns complex parallel execution from "manually babysit each process" into "configure once and watch it run."
The combined weekly gains of this repo (+14,101) and oh-my-claudecode (+7,543, see #5) show the entire Yeachan-Heo ecosystem is building a dedicated following. Developers choosing AI coding tools are no longer just comparing which CLI is better — they're evaluating which orchestration layer fits their workflow best.
#3 — luongnv89/claude-howto | An Engineer's Hands-On Claude Code Guide
A visual, example-driven guide to Claude Code — from basic concepts to advanced agents, with copy-paste templates that bring immediate value.
This week +10,745 stars | Total 23,009 | Python (docs-focused) | MIT
A documentation-only repo with 2,765 forks — a strong signal of practical utility. claude-howto's differentiator is its focus on copy-paste-ready templates rather than progressive long-form tutorials. Every topic comes with a CLAUDE.md snippet or Skill definition you can drop directly into your setup, closing the "I read it all but don't know where to start" gap.
Coverage spans from basics (how to set up CLAUDE.md) to advanced patterns (multi-agent collaboration, custom hooks, context compression techniques). In this wave of Claude Code Skill ecosystem growth, it's one of the most popular "starter maps" available.
#4 🔁 — NousResearch/hermes-agent | The Self-Evolving Open-Source AI Agent Framework
The agent that grows with you.
This week +10,487 stars | Total 35,820 | Python | MIT | Website
Hermes Agent is the only AI agent framework appearing in both weekly and monthly trending (🔁) this week — meaning its momentum is not a one-time spike. The v0.7.0 release on 2026-04-03 introduced a genuinely functional self-evolution loop: after each completed task, the agent automatically writes a reusable Markdown Skill file into SQLite. On similar future tasks, it searches its own memory store first (FTS5 full-text index, ~10ms latency even with 10k+ Skills).
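The loop described above — write a Markdown skill after each task, full-text search it before the next — can be sketched with Python's standard `sqlite3` module, which ships FTS5 in most builds. Class and method names here are illustrative, not Hermes Agent's actual API.

```python
import sqlite3

class SkillMemory:
    """Minimal sketch of a self-accumulating skill store: Markdown skill
    files indexed with SQLite's FTS5 full-text search, queried before
    starting a new task."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE VIRTUAL TABLE IF NOT EXISTS skills USING fts5(task, body)"
        )

    def save_skill(self, task: str, markdown: str) -> None:
        # Called after a task completes: persist a reusable skill file.
        self.db.execute("INSERT INTO skills VALUES (?, ?)", (task, markdown))
        self.db.commit()

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Before a new task: search memory for relevant prior skills.
        rows = self.db.execute(
            "SELECT body FROM skills WHERE skills MATCH ? ORDER BY rank LIMIT ?",
            (query, k),
        ).fetchall()
        return [body for (body,) in rows]

mem = SkillMemory()
mem.save_skill("parse csv logs", "# Skill: stream rows with csv.DictReader")
mem.save_skill("resize images", "# Skill: use Pillow thumbnail()")
print(mem.recall("csv"))
```

The FTS5 index is what keeps lookups fast at scale — the ~10ms-at-10k-skills figure quoted above is plausible precisely because this is an inverted index, not a linear scan.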
Hermes takes a fundamentally different path from the claw-code ecosystem — rather than competing for Claude Code's position, it operates as a model-neutral framework compatible with any provider (OpenRouter 200+ models, OpenAI, self-hosted endpoints). With 35,820 stars and 4,543 forks, community adoption is well ahead of its HN discussion (peaked at just 4 points) — a textbook case of word-of-mouth developer diffusion.
#5 — Yeachan-Heo/oh-my-claudecode | Multi-Agent Orchestration Layer for Claude Code
Teams-first Multi-agent orchestration for Claude Code.
This week +7,543 stars | Total 26,134 | TypeScript | MIT | Website
The v4.1.7 update makes Team the official orchestration entry point — 32 specialized agents, 40+ Skills, intelligent parallelization. One command spins up multiple Claude panes working on different subtasks simultaneously, with automatic result merging.
Combined with oh-my-codex (#2), the Yeachan-Heo ecosystem gained over 21,000 stars this week alone, making its multi-agent orchestration ecosystem impossible to ignore. For users accustomed to asking one question at a time, this represents a fundamental workflow shift: instead of "can Claude do X?" the question becomes "how many agents should I spin up to finish X fastest?"
#6 — onyx-dot-app/onyx | Full-Featured Enterprise Open-Source AI Chat Platform
Open Source AI Platform - AI Chat with advanced features that works with every LLM.
This week +5,449 stars | Total 25,969 | Python | Website
Onyx is one of the most feature-complete self-hosted enterprise AI chat solutions available, supporting RAG knowledge bases, multi-LLM switching, enterprise search, and vector search. Founded in 2023 and now at 25k+ stars, its 3,465 forks indicate it's not just being used — it's being heavily forked and customized. A strong option for teams wanting full control over RAG + Chat on their own infrastructure.
#7 — sherlock-project/sherlock | Cross-Platform Username Tracking OSINT Tool
Hunt down social media accounts by username across social networks.
This week +5,167 stars | Total 80,482 | Python | MIT | Website
Sherlock is one of GitHub's most well-known OSINT tools, supporting username lookups across 400+ social platforms with a simple pip install. This week's +5,167 gain has no obvious trigger event — likely seasonal cybersecurity course demand or social media resharing.
Risk disclosure: In some jurisdictions, tracking someone's accounts without authorization may raise privacy law concerns. Check your local regulations before use.
#8 — google-research/timesfm | Google's Time-Series Foundation Model Gets 2.5 Update
TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model developed by Google Research for time-series forecasting.
This week +4,137 stars | Total 15,571 | Python | Apache-2.0 | Official blog
The TimesFM 2.5 release on 2026-03-31 drove this week's surge. Key updates: 200M parameters, 16,000-step context window, optional 30M continuous quantile prediction head (forecasting up to 1,000 steps), and restored covariate (XReg) support. Checkpoints and official integrations are available on Hugging Face and Google BigQuery.
The HN discussion (324 points, 109 comments) centered on whether Transformer architectures can truly generalize to time-series forecasting. Most commenters agreed TimesFM's zero-shot performance is competitive with supervised methods in specific domains (retail, web traffic), but still falls short on complex financial time series.
What this means for you: if your work involves periodic data forecasting (sales, traffic, inventory), TimesFM 2.5 is currently the best zero-shot baseline to try first — no training data needed, download and go.
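Before adopting any zero-shot forecaster, it is worth sanity-checking it against the cheapest possible baseline on your own data. A seasonal-naive forecast — repeat the last full cycle — is the standard yardstick; the data below is hypothetical.

```python
def seasonal_naive(history, horizon, season=7):
    """Forecast by repeating the last full season (e.g. a weekly cycle).
    Any zero-shot model worth deploying should beat this on your data."""
    last_season = history[-season:]
    return [last_season[i % season] for i in range(horizon)]

def mae(actual, forecast):
    """Mean absolute error between actuals and forecasts."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical daily traffic with a weekly pattern, observed for 4 weeks.
history = [100, 120, 130, 125, 140, 90, 80] * 4
future  = [102, 118, 133, 127, 138, 92, 79]   # the week that followed

pred = seasonal_naive(history, horizon=7)
print(round(mae(future, pred), 2))  # → 2.0
```

If TimesFM 2.5 cannot beat this one-liner on your series, the foundation model is not adding value for that workload.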
#9 — google-ai-edge/gallery | Google's Official On-Device AI App Showcase
A gallery that showcases on-device ML/GenAI use cases and allows people to try and use models locally.
This week +2,934 stars | Total 19,272 | Kotlin | Apache-2.0
Google AI Edge Gallery is an open-source reference app for Android (Kotlin) + iOS (Swift), demonstrating various on-device inference scenarios with models like Gemma. This week's gains appeared in sync with LiteRT-LM (#10), likely driven by the Google Developers Blog post on Gemma 4 on-device agentic skills.
v0.10.1 additions: 128K context local conversations, on-device function calling, and Gemma 4 inference accelerated via LiteRT-LM. Fully open-source — fork it as a starting point for your own on-device AI app.
#10 — google-ai-edge/LiteRT-LM | Google's Edge LLM Inference Framework Goes Open-Source
This week +1,336 stars | Total 2,842 | C++ | Apache-2.0 | Official docs
LiteRT-LM is Google AI Edge's cross-platform (Android, iOS, Web, Desktop, Raspberry Pi, etc.) high-performance LLM edge inference engine, officially open-sourced this week. It supports constrained decoding for improved output accuracy in agentic workflows, along with a LiteRT-LM CLI tool.
Key context: Gallery (#9)'s on-device Gemma 4 inference is actually powered by LiteRT-LM under the hood. Both repos appearing in this week's Trending signals Google systematically pushing on-device AI from "experimental demo" to "production-deployable infrastructure."
Spotlight — Top New Repos Top 10
#1 — milla-jovovich/mempalace | Actress Milla Jovovich's AI Memory System
The highest-scoring AI memory system ever benchmarked. And it's free.
Total 23,986 stars | Python | MIT | Created: 2026-04-05 | Website
The most unexpected repo of the week: Resident Evil star Milla Jovovich collaborated with engineer Ben Sigman to build MemPalace using Claude Code over several months. Her motivation: frustration with existing AI memory systems that "decide what to remember for you" while discarding the reasoning context and nuance she actually needed.
Technical core: A memory palace-inspired architecture with three layers — wings (people and projects) / halls (memory types) / rooms (specific concepts). Instead of letting AI decide what to filter, it retains all conversation content with structured retrieval replacing flat vector search. According to official benchmarks, retrieval quality jumped 34 percentage points from flat vector search (60.9%) to this structure (94.8%), outperforming paid competitors Mem0 and Zep (~85%).
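The wings/halls/rooms layering can be sketched as a nested structure where retrieval narrows by path instead of ranking flat vectors. This is a toy illustration of the organizing idea, not MemPalace's actual data model.

```python
from collections import defaultdict

class MemoryPalace:
    """Sketch of the three-layer layout described above:
    wings (people/projects) -> halls (memory types) -> rooms (concepts).
    Nothing is filtered out; retrieval walks the structure."""

    def __init__(self):
        # wing -> hall -> room -> list of raw memory entries
        self.wings = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))

    def store(self, wing, hall, room, entry):
        # Every entry is kept verbatim — the AI does not decide what to drop.
        self.wings[wing][hall][room].append(entry)

    def retrieve(self, wing, hall=None, room=None):
        # Progressively narrow: whole wing, one hall, or one room.
        halls = self.wings[wing]
        if hall is None:
            return {h: {r: list(v) for r, v in rooms.items()} for h, rooms in halls.items()}
        if room is None:
            return {r: list(v) for r, v in halls[hall].items()}
        return halls[hall][room]

palace = MemoryPalace()
palace.store("project-x", "decisions", "database", "chose SQLite for portability")
palace.store("project-x", "decisions", "database", "rejected Postgres: ops burden")
print(palace.retrieve("project-x", "decisions", "database"))
```

The claimed accuracy gain comes from exactly this: a query about "project-x database decisions" addresses a room directly rather than hoping the right chunks surface from a flat embedding index.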
Benchmark controversy: MemPalace claims LongMemEval R@5 of 100%, but the community noted this figure required targeted fixes for 3 failing test cases plus Haiku reranking. The honest score without reranking is 98.4%. The HN discussion (55 points) focused primarily on this "benchmark honesty" question.
Note before use: the repo is only 3 days old (created 2026-04-05), and the star surge benefits from celebrity effect. Run your own validation before any production deployment.
#2 — santifer/career-ops | Automating the Entire Job Search with Claude Code Skills
AI-powered job search system built on Claude Code. 14 skill modes, Go dashboard, PDF generation, batch processing.
Total 22,158 stars | JavaScript | MIT | Created: 2026-04-04
Author Santiago, a Head of Applied AI, built this system to manage his own job search — evaluating 740+ positions, generating 100+ tailored resumes, and ultimately landing his current role through it. The entire system runs locally; resumes and personal data stay on your machine.
Its 14 Skill modes cover: job scoring (A-F across 10 dimensions), ATS-optimized resume generation (auto-injecting keywords per JD), Playwright-based auto-fill, batch processing with parallel sub-agents, and a Go dashboard for visual progress tracking. With 4,143 forks, a large number of people are directly forking and customizing it.
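The A–F scoring across 10 dimensions reduces to a weighted average mapped onto letter cutoffs. Dimension names and cutoffs below are illustrative, not career-ops's actual rubric.

```python
def grade_job(scores, weights=None):
    """Score a posting across N dimensions (0-10 each) and map the
    weighted average to a letter grade. Cutoffs are hypothetical."""
    weights = weights or {k: 1.0 for k in scores}
    avg = sum(scores[k] * weights[k] for k in scores) / sum(weights.values())
    for cutoff, letter in [(9, "A"), (7.5, "B"), (6, "C"), (4.5, "D")]:
        if avg >= cutoff:
            return letter
    return "F"

# Hypothetical evaluation of one job posting across 10 dimensions.
scores = {
    "comp": 8, "remote": 10, "stack_fit": 9, "seniority": 7, "growth": 8,
    "stability": 6, "team": 7, "process": 8, "mission": 5, "location": 9,
}
print(grade_job(scores))  # → B  (average 7.7)
```

The point of making the rubric explicit is batch processing: once scoring is a function, evaluating 740 postings is a loop, not 740 judgment calls.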
The core value isn't "AI writes your resume" — it's reducing the marginal cost of batch evaluation and customization to near zero.
#3 — Gitlawb/openclaude | Making Claude Code's Architecture Work with Any LLM
Open Claude is an open-source coding-agent CLI for OpenAI, Gemini, DeepSeek, Ollama, Codex, GitHub Models, and 200+ models via OpenAI-compatible APIs.
Total 19,425 stars | TypeScript | Created: 2026-04-01
One of the most notable derivative projects since the Claude Code source leak — rewriting Claude Code's agent architecture into an open-source CLI compatible with 200+ models. Its 6,816 forks (far above most repos of similar size) show developers running extensive model-switching experiments on this foundation.
Note: The licensing status of repos derived from the source leak remains unclear. Exercise caution regarding legal risk before any production use.
#4 — safishamsi/graphify | AI Skill: Turn Any Folder into a Queryable Knowledge Graph
AI coding assistant skill. Turn any folder of code, docs, papers, or images into a queryable knowledge graph.
Total 10,572 stars | Python | MIT | Created: 2026-04-03 | PyPI
graphify positions itself as a cross-platform AI coding assistant Skill (supporting Claude Code, Codex, OpenClaw, etc.) that builds a GraphRAG knowledge graph from any folder. Its core advantage: traditional RAG performs poorly on structural questions like "how does module A affect module B." By preserving call relationships and dependency chains in a knowledge graph, graphify can theoretically deliver significantly higher accuracy for such queries.
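The structural advantage is easy to see in miniature: "does module A affect module B?" is graph reachability over preserved call/dependency edges — information a flat chunk index discards. A toy sketch with a hypothetical call graph:

```python
from collections import deque

def affects(call_graph, src, dst):
    """Does `src` transitively affect `dst`? BFS over the dependency
    edges that a knowledge graph preserves but flat RAG loses."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in call_graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical call graph extracted from a codebase.
graph = {
    "auth":    ["session", "db"],
    "session": ["cache"],
    "billing": ["db"],
    "cache":   [],
    "db":      [],
}
print(affects(graph, "auth", "cache"))     # True: auth -> session -> cache
print(affects(graph, "billing", "cache"))  # False: no path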
#5 — emdash-cms/emdash | Cloudflare's Spiritual Successor to WordPress
EmDash is a full-stack TypeScript CMS based on Astro; the spiritual successor to WordPress.
Total 8,380 stars | TypeScript | MIT | Created: 2026-04-01 | Website
The most strategically significant new project this week. Cloudflare officially launched EmDash on 2026-04-01, positioning it as "the successor that fixes WordPress's plugin security problem." According to Patchstack data cited by Cloudflare, 96% of WordPress security issues originate from plugins.
EmDash's answer is a sandboxed plugin architecture: every plugin runs in an isolated Dynamic Worker with explicitly declared permissions and no direct access to the core system. The stack: full TypeScript, Astro frontend framework, Kysely database abstraction layer (supporting Cloudflare D1 + R2 or self-hosted SQLite), and content stored as structured JSON. Built-in AI agent and MCP server integration means it's designed from the ground up for AI-driven content operations.
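The declared-permissions model amounts to: a plugin states up front what it may touch, and every call is checked against that manifest. The sketch below illustrates the pattern in Python; EmDash's real plugins are TypeScript Dynamic Workers, and all names here are hypothetical.

```python
class SandboxedPlugin:
    """Sketch of a declared-permissions plugin: capabilities are fixed
    at construction and enforced on every operation."""

    def __init__(self, name, permissions):
        self.name = name
        self.permissions = frozenset(permissions)  # e.g. {"content:read"}

    def _require(self, perm):
        if perm not in self.permissions:
            raise PermissionError(f"plugin '{self.name}' lacks '{perm}'")

    def read_content(self, store, key):
        self._require("content:read")
        return store.get(key)

    def write_content(self, store, key, value):
        self._require("content:write")
        store[key] = value

store = {"post-1": {"title": "Hello"}}
viewer = SandboxedPlugin("seo-analyzer", {"content:read"})
print(viewer.read_content(store, "post-1"))
try:
    viewer.write_content(store, "post-1", {})   # not in the manifest
except PermissionError as e:
    print(e)
```

Contrast this with WordPress, where a plugin runs as the same PHP process with full database access — which is exactly how plugin bugs become site compromises.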
Current limitations: v0.1.0 developer preview only. Do not migrate production WordPress sites yet. WordPress PHP plugins are incompatible and must be rewritten as EmDash native TypeScript plugins.
#6 — HKUDS/OpenHarness | Open Agent Harness from HKU
OpenHarness: Open Agent Harness.
Total 7,600 stars | Python | MIT | Created: 2026-04-01
From the University of Hong Kong's Data Science Lab, OpenHarness is the most academically grounded new repo this week. It serves as a universal agent harness for any LLM — not locked to a specific AI service, but providing general-purpose infrastructure for task execution, tool invocation, and state management.
#7 — JuliusBrussee/caveman | The Week's Biggest Community Hit: A Skill That Cuts 75% of Tokens
why use many token when few token do trick — Claude Code skill that cuts 65% of tokens by talking like caveman.
Total 6,954 stars | Python | MIT | Created: 2026-04-04 | Website
This week's true community sensation: an HN thread with 883 points and 361 comments made Caveman the single most-discussed repo in the HN developer community this week. Hackaday's headline: "So Expensive, A Caveman Can Do It."
The concept is dead simple: a CLAUDE.md Skill that instructs Claude Code to "drop articles, skip pleasantries, cut filler, keep technical terms and code." Essentially, it makes the AI speak in caveman-style minimal English.
Measured results (official benchmark): average 65% output token savings across standard software engineering tasks, with individual tasks reaching up to 87% reduction and a minimum of 22%. Caveman only affects output tokens — reasoning tokens remain unchanged. A companion memory compression tool can further cut ~45% of input tokens per session.
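You can measure the effect of this style on your own transcripts. The crude sketch below drops articles and filler post hoc to show how the savings arithmetic works — the real skill achieves compression via prompt instructions, not post-processing, and this filler list is illustrative.

```python
import re

FILLER = {
    "the", "a", "an", "certainly", "basically", "simply", "just",
    "please", "very", "really",
}

def cavemanize(text):
    """Drop articles and filler words, keep technical terms intact.
    A toy stand-in for caveman-style output, used only to demo measurement."""
    words = re.findall(r"\S+", text)
    return " ".join(w for w in words if w.lower().strip(".,!") not in FILLER)

def savings(before, after):
    """Percentage reduction in word count (a proxy for output tokens)."""
    b, a = len(before.split()), len(after.split())
    return round(100 * (b - a) / b, 1)

verbose = ("Certainly! The function simply reads the config file and then "
           "just parses the JSON payload.")
terse = cavemanize(verbose)
print(terse)
print(savings(verbose, terse), "% fewer words")
```

Run the same comparison on real Claude Code transcripts with and without `/caveman` enabled to see where your workflow lands in the 22-87% range.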
To enable: run `npx skills add JuliusBrussee/caveman`, then type `/caveman` in a Claude Code conversation to toggle it on.
Note: The repo description says 65% while some media report 75% — the difference comes from measurement methodology (average vs. best case). Run your own benchmarks on your actual workflow.
#8 — ultraworkers/claw-code-parity | Rust Port of claw-code
claw-code Rust port parity work.
Total 6,618 stars | Rust | Created: 2026-04-02
This is the Rust rewrite project for ultraworkers/claw-code (the repo that broke 100k stars fastest after the Claude Code leak). Its 5,422 forks nearly match the star count, showing a massive number of engineers are actively contributing to the port rather than just watching.
The Rust port goals: faster runtime, memory-safe harness architecture, and a clean-room reimplementation. The dev/rust branch hasn't merged into main yet, and the legal status remains equally unclear.
#9 — kevinrgu/autoagent | Letting AI Agents Improve Their Own Harness
autonomous harness engineering.
Total 3,864 stars | Python | Created: 2026-04-02
autoagent's concept is straightforward: let an AI agent run overnight and autonomously identify and fix the least efficient parts of its own harness. It shares some overlap with Hermes Agent's "self-evolution" approach, but autoagent focuses specifically on engineering infrastructure optimization rather than task-level memory accumulation.
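The first step of any such loop is knowing where the harness spends its time. A minimal sketch of that step — time each stage, surface the slowest as the next optimization target — with hypothetical stage names:

```python
import time
from contextlib import contextmanager

class HarnessProfiler:
    """Sketch of the overnight loop's measurement phase: accumulate
    wall-clock time per harness stage, then pick the worst offender."""

    def __init__(self):
        self.timings = {}

    @contextmanager
    def stage(self, name):
        start = time.perf_counter()
        try:
            yield
        finally:
            elapsed = time.perf_counter() - start
            self.timings[name] = self.timings.get(name, 0.0) + elapsed

    def slowest(self):
        # The stage the agent should try to improve next.
        return max(self.timings, key=self.timings.get)

prof = HarnessProfiler()
with prof.stage("load_context"):
    time.sleep(0.02)    # stand-in for real work
with prof.stage("tool_dispatch"):
    time.sleep(0.001)
print(prof.slowest())   # → load_context
```

What makes autoagent interesting is the step after this one: handing the profiler's verdict back to the agent and letting it attempt the fix unattended.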
#10 — 0xGF/boneyard | Auto-Generate Pixel-Perfect Skeleton Screens from Real DOM
Auto generated skeleton loading framework.
Total 3,707 stars | TypeScript | MIT | Created: 2026-04-01 | Website
The only pure frontend tool to go viral this week. Boneyard does one thing: scan your existing DOM and auto-generate pixel-accurate skeleton loading screens without writing any skeleton CSS. The HN discussion (31 points, 17 comments) agreed it solves a pain point "every frontend engineer has experienced but nobody seriously addressed" — skeleton screens that don't match the actual UI layout.
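The scan-and-generate idea reduces to: read each element's bounding box, emit a matching placeholder rectangle. A toy version in Python with hypothetical boxes (boneyard's real output format and API may differ):

```python
def skeleton_css(boxes):
    """Given element bounding boxes (x, y, width, height) as a DOM scan
    would report them, emit absolutely-positioned placeholder rules."""
    rules = []
    for i, (x, y, w, h) in enumerate(boxes):
        rules.append(
            f".skel-{i} {{ position: absolute; left: {x}px; top: {y}px; "
            f"width: {w}px; height: {h}px; background: #e2e2e2; "
            f"border-radius: 4px; }}"
        )
    return "\n".join(rules)

# Hypothetical boxes: an avatar, a title line, and two text lines.
boxes = [(16, 16, 48, 48), (80, 20, 240, 20),
         (16, 80, 320, 14), (16, 102, 280, 14)]
print(skeleton_css(boxes))
```

Because the rectangles come from the real layout rather than hand-written CSS, the skeleton cannot drift out of sync with the UI — which is the pain point the HN thread kept returning to.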
Monthly Trend Cross-Reference
Two repos appeared in both weekly and monthly trending (🔁) this week:
siddharthvaddem/openscreen: Sustained monthly trending for multiple weeks. This isn't a one-time spike — it's organic growth driven by developers actually using it for demos and recommending it to colleagues.
NousResearch/hermes-agent: Continued monthly momentum. The v0.7.0 self-evolution update helps it maintain a unique position amid intense Claude Code ecosystem competition — model-neutral, self-accumulating, deeply personalizable. In a landscape where everyone is chasing Claude Code users, this is a genuine differentiation strategy.
Trend Insights
The Skills ecosystem is graduating from personal hacks to real infrastructure
This week saw caveman (token compression), graphify (knowledge graphs), career-ops (job search automation), nuwa-skill (thought distillation), and claude-howto (documentation) all trending simultaneously, collectively exceeding 65,000 stars. A week ago these were personal tools; now they have dedicated PyPI packages, websites, and Discord servers. The Skills framework is no longer "a small Claude Code feature" — an independent distribution and commercialization channel is taking shape.
Google's on-device AI strategy enters delivery phase
TimesFM 2.5 (time-series forecasting), AI Edge Gallery update (on-device Gemma 4), and LiteRT-LM (edge inference framework) — three repos appearing in the same week's Trending is no coincidence. Google is pushing two years of on-device AI research from "papers and demos" to "production-deployable open-source infrastructure." The practical impact for mobile developers: LiteRT-LM is now a serious official inference engine option worth evaluating.
The "open-source alternative" cycle keeps accelerating
OpenScreen replacing Screen Studio, EmDash challenging WordPress, openclaude making Claude Code's architecture work with any LLM, claw-code-parity rewriting the Claude Code runtime — more than half of this week's top-starred repos are open-source alternatives or reimplementations of paid or closed-source tools. Leaks + open-source community + AI-assisted development have dramatically lowered the barriers to reverse engineering and reimplementation. This replacement cycle shrinking from years to months is a defining characteristic of the 2026 open-source landscape.


