We're AI-assisted, with humans making the decisions. Here's how we work with our digital teammates, what we expect from them, and where we draw hard lines.
We use AI like you'd use a really capable intern—brilliant at certain tasks, needs oversight, and definitely shouldn't be making important decisions unsupervised. Our AI tools help us move faster and think through problems, but every decision that affects you or your health goes through human review.
No hiding, no fine print. When AI helps create something you see, we tell you exactly which tools were involved and how a human reviewed it. Transparency isn't a nice-to-have—it's how we build trust.
Every piece of content, every design decision, every line of code gets human review before it reaches you. AI might draft it, but a person edits it, approves it, and takes responsibility for it.
We never share your health information with AI systems. Period. Our AI teammates help with general content and code—they never see your test results, verification status, or anything personally identifying.
We use AI tools in three main areas, each with clear human oversight and boundaries:
Claude and ChatGPT help us draft website copy, documentation, and communications. They're great at getting ideas onto the page quickly—but terrible at knowing when something sounds human.
Human role: Review everything for accuracy, tone, and authenticity. Add personality, fact-check claims, and ensure nothing sounds like corporate buzzword soup.
GitHub Copilot and Claude help write boilerplate code, suggest functions, and speed up development. They're excellent at the repetitive stuff and surprisingly good at generating code that works.
Human role: Design architecture, review all AI-generated code for security and efficiency, test thoroughly, and ensure systems do what they're supposed to do.
Your test verification uses cryptographic techniques, not AI. We verify test dates without storing results—the entire process works without any AI involvement whatsoever.
Human role: Design system requirements, set privacy boundaries, monitor performance. Most importantly: no human (or AI) ever sees your actual test results.
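To show what we mean by "cryptographic, not AI," here's a minimal sketch of the kind of check this enables. It's illustrative only, not our production code: it assumes a hypothetical setup where a testing provider signs an attestation containing only an opaque user ID and a test date (using Ed25519 via Python's cryptography library), and our side just verifies that signature. The names here (issue_attestation, verify_attestation, the opaque user ID) are made up for the example.

```python
from datetime import date
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# --- Hypothetical testing provider side --------------------------------
# The provider signs a small attestation: an opaque user ID and a test
# date. Crucially, no test results are ever part of the signed message.
provider_key = Ed25519PrivateKey.generate()
provider_public_key = provider_key.public_key()

def issue_attestation(opaque_user_id: str, test_date: date) -> tuple[bytes, bytes]:
    """Return (message, signature) attesting that a test happened on test_date."""
    message = f"{opaque_user_id}|tested_on:{test_date.isoformat()}".encode()
    return message, provider_key.sign(message)

# --- Our side (verification only) ---------------------------------------
def verify_attestation(public_key: Ed25519PublicKey, message: bytes, signature: bytes) -> bool:
    """Check the provider's signature; nothing beyond the date is revealed or stored."""
    try:
        public_key.verify(signature, message)
        return True
    except InvalidSignature:
        return False

message, signature = issue_attestation("user-7f3a", date(2025, 1, 15))
print(verify_attestation(provider_public_key, message, signature))  # True
```

The design point is simply that the signed message carries only a date and a pseudonymous identifier, so verifying it can't expose anything else, and no AI is anywhere in the loop.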
We created attest.ink because existing AI disclosure methods felt like afterthoughts. We use attest.ink to generate badges for our content that clearly show which AI tools were used, or that none were used at all.
Everything we publish, from pages like this one to blog posts and social media, includes these badges. No hunting through fine print—just clear, upfront information about how content was created.
Why go through the trouble? Because trust is everything, and trust starts with honesty. If we're not transparent about our own processes, how can you trust us with your health data?
"AI helps us move faster and think through problems, but it doesn't make decisions about your health, your privacy, or your experience. Those are human responsibilities." Austin Harshberger, Founder
Our AI tools help us work faster and explore ideas, but humans set priorities, make decisions, and take responsibility for outcomes.
We use AI to move quickly without sacrificing quality. Every AI-assisted output gets human review before it reaches you.
We document AI use not because we have to, but because transparency builds trust.
We're clear about where AI helps and where it doesn't belong. Here are our non-negotiables:
Your test results and personal health information never touch AI systems. That data is processed entirely by our own secure infrastructure.
AI doesn't diagnose, recommend treatments, or make health decisions. We leave that to qualified humans.
When you need help, you talk to humans who understand your situation and can actually solve problems.
Read about building AI-assisted systems with strong human oversight (and occasional existential crises)
Get to know Claude, ChatGPT, and the other digital teammates who help us build
Check out our open-source tool for transparent AI disclosure—because everyone should know how their content gets made