Files
app/lib/ai.ts
Albert e43d6493d2 feat: Step 6 - Write-through cache API
Implement the core write-through cache pattern for node creation.
This is the architectural foundation of the application.

Changes:
- Add @google/generative-ai dependency for embeddings
- Create lib/db.ts: SurrealDB connection helper with JWT auth
- Create lib/ai.ts: AI embedding generation using text-embedding-004
- Create app/api/nodes/route.ts: POST endpoint implementing write-through cache

Write-through cache flow (sketched in the handler example after this list):
1. Authenticate user via SurrealDB JWT
2. Publish node to ATproto PDS (source of truth)
3. Generate 768-dimensional embedding via Google AI
4. Cache node + embedding + links in SurrealDB
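
For illustration, a hedged sketch of the POST handler in app/api/nodes/route.ts following this flow. The import paths, cookie name, request field names, and the publishToPds helper are assumptions rather than the actual code; getDb and generateEmbedding are the helpers described above.

import { NextRequest, NextResponse } from 'next/server';
import { getDb } from '@/lib/db';
import { generateEmbedding } from '@/lib/ai';
import { publishToPds } from '@/lib/atproto'; // hypothetical ATproto publish helper

export async function POST(req: NextRequest) {
  // 1. Authenticate: read the user's SurrealDB JWT (cookie name assumed).
  const jwt = req.cookies.get('surreal_jwt')?.value;
  if (!jwt) {
    return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
  }

  const body = await req.json(); // e.g. { title, content, links } (fields assumed)

  // 2. Publish the node to the ATproto PDS first: the PDS is the source of truth.
  const record = await publishToPds(body, req);

  // 3. Generate a 768-dimensional embedding for the node text.
  const embedding = await generateEmbedding(`${body.title}\n${body.content}`);

  // 4. Cache node + embedding + links in SurrealDB under the user's JWT,
  //    so row-level permissions apply to the write.
  const db = await getDb(jwt);
  const created = await db.create('node', {
    uri: record.uri,
    ...body,
    embedding,
  });

  return NextResponse.json(created, { status: 201 });
}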

Updated schema to use 768-dimensional embeddings (text-embedding-004)
instead of 1536 dimensions.
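
For illustration only, the corresponding SurrealDB definitions might look like the statements below; the table name, index name, and MTREE vector-index syntax are assumptions about the schema file, which is not shown here. They would typically be run once at migration time via db.query(schema).

// Hedged sketch of the 768-dimension schema change (names and syntax assumed).
const schema = `
  DEFINE FIELD embedding ON TABLE node TYPE array<float>;
  DEFINE INDEX node_embedding_idx ON TABLE node
    FIELDS embedding MTREE DIMENSION 768 DIST COSINE;
`;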

Security:
- Row-level permissions enforced via SurrealDB JWT (see the lib/db.ts sketch after this list)
- All secrets server-side only
- ATproto OAuth tokens from secure cookies
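
A minimal sketch of the lib/db.ts connection helper, assuming the surrealdb.js SDK; the package name, environment variable names, and exact method signatures are assumptions, not the actual file.

import { Surreal } from 'surrealdb.js';

// Connects with server-side secrets, then authenticates as the user so
// SurrealDB's row-level permissions apply to every query on this handle.
export async function getDb(token: string): Promise<Surreal> {
  const db = new Surreal();
  await db.connect(process.env.SURREAL_URL!);
  await db.use({
    namespace: process.env.SURREAL_NS!,
    database: process.env.SURREAL_DB!,
  });
  await db.authenticate(token); // the user's SurrealDB JWT
  return db;
}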

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-09 00:12:46 +00:00

25 lines
797 B
TypeScript

import { GoogleGenerativeAI } from '@google/generative-ai';

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);

const embeddingModel = genAI.getGenerativeModel({
  model: 'text-embedding-004',
});

/**
 * Generates a vector embedding for a given text using Google's text-embedding-004 model.
 * The output is a 768-dimension vector (not 1536 as originally specified).
 *
 * @param text - The text to embed
 * @returns A 768-dimension vector (Array<number>)
 */
export async function generateEmbedding(text: string): Promise<number[]> {
  try {
    const result = await embeddingModel.embedContent(text);
    return result.embedding.values;
  } catch (error) {
    console.error('Error generating embedding:', error);
    throw new Error('Failed to generate AI embedding.');
  }
}
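
Note that generateEmbedding throws on failure rather than returning an empty vector, so in the write-through flow above a failed embedding aborts the request before anything is cached in SurrealDB; by that point the ATproto PDS write, as the source of truth, has already succeeded.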