# ATQ AI Operating System
## Goal

Describe `atlantyqa-universe` as an OSS AI Operating System focused on:

- governance models
- humanized ops
- human-agent collaboration
- multi-platform operation through `atq`

It is not just a scripts repository. It is a coordination layer for people, agents, policies, runbooks, and evidence.
## Core idea

`atq` should feel like the short human interface of the system.

The internal complexity exists, but it should not be the user's first experience. The right entry layer is:
- a clear human intent
- a short route by role or device
- a visible validation sequence
- a traceable outcome
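These four properties can be sketched as a tiny routing function. Everything below is illustrative: the `ROUTES` table, the role names, and the `route_intent` helper are hypothetical assumptions, not code that exists in the repository.

```python
# Hypothetical sketch of the atq entry layer: a clear human intent is
# mapped to a short route by role, producing a traceable outcome record.
# Names and routes here are assumptions for illustration only.

ROUTES = {  # assumed role -> entrypoint mapping
    "device-user": "docs/atq/get-started-multiplatform.en.md",
    "operator": "docs/portal/**",
    "contributor": "GET-STARTED.md",
}

def route_intent(role: str, intent: str) -> dict:
    """Return a traceable record: the intent, its route, and a status."""
    route = ROUTES.get(role)
    if route is None:
        # unknown roles are blocked visibly rather than guessed
        return {"intent": intent, "route": None, "status": "blocked"}
    return {"intent": intent, "route": route, "status": "routed"}

record = route_intent("device-user", "check cluster health")
print(record["route"])  # the short entrypoint for this role
```

The point of the sketch is that routing is explicit and inspectable: the outcome is a record, not a side effect.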
## Recommended layers

| Layer | What it solves | Where it lives |
|---|---|---|
| interface | short human commands and role-based entrypoints | `scripts/atq`, `docs/atq/**` |
| governance | HILT, policies, gates, approvals, decisions | `docs/internal/policies/**`, `gitops/**`, `.github/**` |
| humanized-ops | runbooks, FSM, live sessions, UX/UI, onboarding | `docs/portal/**`, `docs/ops/**`, `scripts/proxmox/**` |
| knowledge-os | bitacora, agent-context, datasets, ontologies | `docs/internal/**`, `knowledge/**`, `schemas/**` |
| runtime-adapters | Proxmox, MicroK8s, Docker, GitHub, devices | `scripts/proxmox/**`, `scripts/microk8s/**` |
| contributor-layer | repo development, testing, release, PRs | `README.md`, `GET-STARTED.md`, `CONTRIBUTING.md` |
## What "humanize ops" means

"Humanize ops" is not just friendlier text. It means:
- fewer implicit steps
- clearer routes by role
- explicit critical thinking
- state-based validation instead of intuition
- controlled cognitive load
- visible evidence before promotion
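A minimal sketch of what "state-based validation instead of intuition" and "visible evidence before promotion" can look like as a check. The `can_promote` function and its field names are hypothetical illustrations, not taken from the repository.

```python
# Hypothetical promotion gate: a change is promoted only when explicit
# evidence exists AND a named human has approved it. Field names are
# assumptions for illustration; they are not repository conventions.

def can_promote(change: dict) -> bool:
    """State-based check: both conditions must hold, no intuition involved."""
    has_evidence = bool(change.get("evidence"))            # visible evidence
    has_approval = change.get("approved_by") is not None   # explicit human decision
    return has_evidence and has_approval

assert can_promote({"evidence": ["test-log.txt"], "approved_by": "alice"})
assert not can_promote({"evidence": [], "approved_by": "alice"})
assert not can_promote({"evidence": ["test-log.txt"], "approved_by": None})
```

The design choice is that the gate evaluates recorded state, so the same inputs always give the same answer and the decision trail is preserved.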
## Human governance over AI and mobile systems
ATLANTYQA starts from a simple thesis:
- the AI, mobile, and automated systems around us can and should remain governable by humans
- control must not be delegated to opaque black boxes without traceability
- mobile cannot become an interface of pressure, fatigue, or automatic obedience
That implies:
- short and understandable mobile routes
- visible human decision before sensitive actions
- Socratic questions instead of opaque automation
- preservation of evidence and intellectual property through local-first
- explicit protection of mental health and cognitive load in the user experience
## What "vibe coding" means here
We use vibe coding in a practical, governed way:
- learning by doing
- short iterations
- one main goal per block
- immediate feedback
- prompts and CLI as a teaching interface
- all without sacrificing traceability or gates
This is not "coding by impulse". It is fast exploration with guardrails.
## Learning by doing
The smallest useful learning unit should be:
- understand the goal
- execute one small action
- validate the result
- record evidence
- decide whether to continue, fix, or block
That pattern already fits the repository FSM model.
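The loop above can be sketched as a small finite state machine. The states, events, and transition table below are illustrative assumptions; the repository's actual FSM model may differ.

```python
# Illustrative FSM for the learning unit:
# understand -> execute -> validate -> record -> decide,
# with "fix" looping back to execute and "block" as a terminal state.
# States and events are assumptions, not the repository's real model.

TRANSITIONS = {
    ("understand", "ok"): "execute",
    ("execute", "ok"): "validate",
    ("validate", "pass"): "record",
    ("validate", "fail"): "execute",     # fix and retry
    ("record", "ok"): "decide",
    ("decide", "continue"): "understand",
    ("decide", "block"): "blocked",
}

def step(state: str, event: str) -> str:
    """Advance the loop; undefined transitions block instead of guessing."""
    return TRANSITIONS.get((state, event), "blocked")

state = "understand"
for event in ["ok", "ok", "fail", "ok", "pass", "ok"]:
    state = step(state, event)
# path: understand -> execute -> validate -(fail)-> execute -> validate
#       -> record -> decide
```

Because every transition is explicit, an unexpected event halts the loop visibly rather than letting the operator proceed on intuition.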
## Recommended usage model

### 1. Use `atq`
For people who want to operate the system from a device:
- start with `docs/atq/get-started-multiplatform.en.md`
- use short entrypoints
- avoid deep internal runbooks unless needed
### 2. Operate ATLANTYQA
For governance, ops, or coordination roles:
- use `docs/portal/**`
- follow UI/UX validation, FSM, and live session guidance
- apply HILT and approval criteria
### 3. Build ATLANTYQA
For repo contributors:
- follow `GET-STARTED.md`
- use `.venv`
- validate local-first before PR
## Documentation design rule
Documentation should not mix in the same layer:
- how to use `atq`
- how to develop the repo
- how to run advanced internal runbooks
The first one should be short and multi-platform. The second should be contributor-first. The third should be deep and specialized.
## Expected outcome
If this architecture is applied well:
- a new collaborator understands how to start in minutes
- a governance actor understands where to decide
- a junior can operate through FSM instead of intuition
- `atq` is perceived as the visible face of the system
- internal runbooks stay decoupled from the main entry layer