I created and actively use a similar tool, but with a different purpose: configuring AI tools so each team member uses the same code style and architecture guides across projects. It:
- builds Docker images for Claude Code and OpenCode dev containers
- creates a custom MCP server that works as a proxy and combines several tools into a single one (for example, web search, fetch, and context7 exposed as a single "web_research" tool that invokes custom code to answer the question)
- copies code style, documentation, and best-practice rules for the technologies used in our projects
- deploys a bunch of helper scripts useful for development
- configures agents, skills, hooks, and commands to use those rules; configuration changes per "mode": documentation, onboarding, code review, and web development all have different settings
- runs AI tools in a Docker container with limited permissions
- provides a feedback tool that generates a session report, used for automatic evaluation and prompt optimization
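The "several tools behind one entry point" idea above can be sketched in plain Python. This is illustrative only: the names (`SubTool`, `web_research`) and the merge strategy are my assumptions, not the internal tool's actual code.

```python
# Hypothetical sketch: one "web_research" tool that fans out to several
# underlying tools (search, fetch, context7) and merges their answers.
from dataclasses import dataclass
from typing import Callable


@dataclass
class SubTool:
    name: str
    run: Callable[[str], str]  # takes the question, returns a text result


def make_web_research(subtools: list[SubTool]) -> Callable[[str], str]:
    """Combine several tools into one entry point, as an MCP proxy might."""
    def web_research(question: str) -> str:
        sections = []
        for tool in subtools:
            try:
                sections.append(f"## {tool.name}\n{tool.run(question)}")
            except Exception as exc:  # one failing sub-tool shouldn't kill the answer
                sections.append(f"## {tool.name}\n(error: {exc})")
        return "\n\n".join(sections)
    return web_research


# Stub sub-tools standing in for real web search / fetch calls.
search = SubTool("search", lambda q: f"top results for {q!r}")
fetch = SubTool("fetch", lambda q: f"page contents relevant to {q!r}")

research = make_web_research([search, fetch])
print(research("rate limiting strategies"))
```

The point is that the model sees one tool with one description, while the proxy decides which real tools to invoke behind it.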
This came out of necessity: uncontrolled use of AI assistants significantly degraded code quality. The goal is to enforce the same development workflow across the team.
This is an internal tool. If anyone is interested, I can create a public repo from it.
Hmm, maybe it's just me, but it's a good thing the different agents use different files; different models need different prompts. Using the same system/user prompts across all three will just give you slightly worse results in one of them, instead of getting the best results you can from each one. At least for the general steering system prompts.
Then for the application-specific documentation, I understand you'd want to share it, as it stays the same for all agents touching the same codebase. But that's easily solved by putting it in DESIGN.md or whatever and appending "Remember to check against DESIGN.md before changing the architecture" or similar.
It's great to have the option to optimize for different models, but I'm not going to do that on 99% of projects. And a good chunk of the agent docs is model-agnostic (how to run the linter, test libraries/practices). It's cool to have an easy way to reuse them, even if that's just copying AGENTS.md into the right places.
> And a good chunk of the agent docs is model-agnostic (how to run the linter, test libraries/practices).
Personally I put stuff like that in the README, since it's useful for humans too, not just directions for machines, and I'm mostly building for other humans. The lighter and smaller the AGENTS.md ends up, the better the models are at following it too, from what I can tell.
Totally valid take. Models might have different prompting guidelines for best results. If a developer uses one tool and wants to optimize their config as much as possible for that specific tool, LNAI is probably not for them.
However, given how many tools there are and how fast each one moves, I find myself jumping between them quite often, just to see which ones I like most or whether some tool has improved since I last checked it. In this case LNAI is very helpful.
> I find myself jumping between them quite often, just to see which ones I like most or whether some tool has improved since I last checked it. In this case LNAI is very helpful.
Most prompts I run I execute in all four at the same time, and literally compare the git diffs from their work, so I understand totally :) But even for comparison, I think that by using the same identical config for all of them you're not actually seeing and understanding the differences, because again, they need different system prompts. By using the same one when you compare, you're not accurately seeing the best of each model.
Fair point. LNAI does support per-tool config overrides in `.ai/.{codex/claude/cursor/etc.}` directories, so you kind of get the best of both worlds :) You can sync identical configs while keeping the flexibility to define per-tool configs where needed, all from a single source of truth in the `.ai/` directory.
This is interesting, but I'm now at a point where I can tell Claude/GPT to "take this skills and prompts repo and adapt it to this project" and it will just do it and put all the files in the right place… so a tool to do this seems redundant right now.
Where do you store your Claude/GPT skills? Inside .codex/.claude?
If so, you could tell them to put it under `.ai/skills` and lnai would symlink them to the `.codex`/`.claude` configs. Single source of truth.
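On disk, the layout described above would look roughly like this (paths and file names are hypothetical, just to show the direction of the link):

```shell
# Illustrative sketch: one skills directory, symlinked into a tool's config dir.
mkdir -p demo/.ai/skills demo/.claude
printf 'skill body\n' > demo/.ai/skills/review.md

# One relative symlink instead of a copied directory per tool:
ln -s ../.ai/skills demo/.claude/skills

# Reading through .claude resolves to the single source of truth:
cat demo/.claude/skills/review.md   # prints: skill body
```

Because the link is relative, the whole repo can be moved or cloned and the link still resolves.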
Looks very similar. That's good - diversity and more options are good.
But ... as the author and maintainer of Ruler I can tell you that I don't use it and I don't recommend using it (or this new tool).
In almost all cases it isn't necessary anymore - most agents support AGENTS.md (or at least a hack like `@AGENTS.md` in CLAUDE.md), and Agent Skills are the best way to customise agents and are available everywhere now.
There are some corner cases where using a tool like Ruler may still make sense, but if in doubt, you probably don't need it.
Yup, with a simple AGENTS file and skills, tools like ruler/lnai might be overkill. However, I still think they are needed for MCPs/permissions/sub-dir rules/different skill formats.
I would really like all AI coding tools to have the same config formats, but I feel like we are not there yet :/
Yes, for MCP servers there's still no good standard. Ruler helps with that. I happen to not use MCPs much, but for an MCP-heavy setup that can help.
At a glance there are a few differences:
- LNAI additionally supports permissions
- The way rules for sub-directories are defined is different. Ruler defines rules for sub-directories in the sub-directories themselves under `.ruler/`. LNAI defines them with a front-matter `paths` property, storing all rules under `.ai/rules`
- ruler supports more tools (I will continue maintaining and improving lnai to also support more tools / configs)
- lnai supports per-tool overrides in `.ai/.{codex/claude/etc.}` for more granular control while keeping a single source of truth
- ruler is a more mature package
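As a sketch of the front-matter approach mentioned above: a rule file under `.ai/rules/` would carry a `paths` property scoping it to a sub-directory. The exact schema and file name here are assumed for illustration, not taken from LNAI's docs.

```markdown
---
paths:
  - "src/api/**"
---
All endpoint handlers must validate input with the shared schema helpers
before touching the database layer.
```

The advantage over per-directory rule files is that every rule lives in one place; the scoping is data, not directory structure.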
It is not really an agent package manager. It is a tool to sync agent configs.
I checked out agent-resources, cool idea. Maybe if lnai gains some traction I could contribute to your tool so it supports importing skills into `.ai/` :)
This solves distribution well. Curious about the change propagation story though - what happens when you update your .ai/ source and tools have cached/transformed versions?
I ran into this building a spec/skill sync system [1] - the "sync once" model breaks down when you need to track whether downstream consumers are aware of upstream changes.
For files that don't need transformation (AGENTS.md, skills, most rules), LNAI creates symlinks: `.claude/CLAUDE.md` → `../.ai/AGENTS.md`. Edit the source, and all tools see it immediately.
For transformed files (Cursor's .mdc frontmatter, GEMINI.md sub-directory rules), you re-run `lnai sync`. LNAI maintains a manifest tracking every generated file with content hashes, so it knows what changed and cleans up orphans automatically.
So it's not really "sync once"; it's "symlink for instant propagation, regenerate on demand for transforms." The manifest ensures LNAI always knows its downstream state.
This system can also break down if you create new skills/rules directly in the tool-specific directories (.claude, .codex, etc.), but that is against LNAI's philosophy. If you need per-tool overrides, you put them in `.ai/.{claude/codex/etc.}` sub-directories and LNAI manages them for you.
Yes, the tool takes that into account and transforms MCPs/permissions/rules into the different tool formats.
Sadly, some tools might not support fine-grained permissions (e.g. Codex), in which case a warning is displayed but everything else still gets transformed/symlinked.
Additionally, you can put per-tool overrides in `.ai/.{codex/claude/cursor/etc.}/` if needed and lnai will automatically symlink those overrides to the respective tools.
I've been using Claude Code, Cursor, and Codex on the same projects. Each tool has its own config format: Claude wants `.claude/`, Cursor wants `.cursor/`, Codex wants `.codex/`. Every time I updated a skill/rule, I had to update it in 3+ places. Usually I'd forget one, and my tools would give inconsistent suggestions.
LNAI is a CLI that lets you define your AI configs once in a `.ai/` directory:
Run `lnai sync` and it exports to native formats for 7 tools: Claude Code, Cursor, GitHub Copilot, Gemini CLI, OpenCode, Windsurf, and Codex.
The interesting part is it's not just copying files. Each tool has quirks:
- Cursor wants `.mdc` files with `globs` arrays in frontmatter
- Gemini reads rules at the directory level, so rules get grouped
- Permissions like `Bash(git:*)` become `Shell(git)` for Cursor
- Some tools don't support certain features (e.g., Cursor has no "ask" permission level). LNAI warns but doesn't fail
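The permission rewrite mentioned above can be sketched as a small translation function. The rule grammar here (`Bash(cmd:*)`, `Shell(cmd)`) is assumed from the example in the list, not taken from LNAI's source:

```python
# Hedged sketch of one permission transform: Claude-style "Bash(git:*)"
# becomes Cursor-style "Shell(git)"; unknown rules return None so the
# caller can warn instead of failing, matching the behaviour described.
import re
from typing import Optional

_BASH_RULE = re.compile(r"^Bash\(([\w./-]+)(?::\*)?\)$")


def to_cursor_permission(rule: str) -> Optional[str]:
    """Translate a Bash(...) rule; None means 'warn and skip'."""
    match = _BASH_RULE.match(rule)
    if match:
        return f"Shell({match.group(1)})"
    return None


print(to_cursor_permission("Bash(git:*)"))  # Shell(git)
```

A real implementation would need one such mapping per tool and per permission kind, which is exactly why this is tedious to maintain by hand.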
Where possible, it uses symlinks, so `.claude/CLAUDE.md` → `../.ai/AGENTS.md`. Edit the source and all tools see the change immediately, without re-syncing.
Usage:
    npm install -g lnai
    lnai init      # creates .ai/ directory
    lnai validate  # checks for config errors
    lnai sync      # exports to all enabled tools
AGENTS.md is usually only part of an AI coding tool's config. LNAI handles syncing the additional configs, like skills, sub-dir rules, MCPs, and permissions, to every tool.
Additionally, even though AGENTS.md is popular, not every tool supports that standard.
For simple config files like AGENTS.md/skills, that is true.
But some configs, like MCPs/permissions/rules, require per-tool transformations. On lnai's per-tool docs pages (e.g. https://lnai.sh/tools/codex/) I have documented what transforms are needed per tool. There are quite a few of them.