feat: Improve UI layout and navigation
- Increase logo size (48x48 desktop, 56x56 mobile) for better visibility
- Add logo as favicon
- Add logo to mobile header
- Move user menu to navigation bars (sidebar on desktop, bottom bar on mobile)
- Fix desktop chat layout - container structure prevents voice controls cutoff
- Fix mobile bottom bar - use icon-only ActionIcons instead of truncated text buttons
- Hide Create Node/New Conversation buttons on mobile to save header space
- Make fixed header and voice controls work properly with containers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
.env.test.example (new file, 21 lines)
@@ -0,0 +1,21 @@
# Test Environment Configuration
# Copy this file to .env.test and fill in your test credentials

# Bluesky Test Account Credentials
# Create a dedicated test account on bsky.social for testing
TEST_BLUESKY_USERNAME=your-test-user.bsky.social
TEST_BLUESKY_PASSWORD=your-test-password

# Application URLs
TEST_APP_URL=http://localhost:3000

# Test Database (if using separate test DB)
TEST_SURREALDB_URL=ws://localhost:8000/rpc
TEST_SURREALDB_NS=test
TEST_SURREALDB_DB=ponderants_test
TEST_SURREALDB_USER=root
TEST_SURREALDB_PASS=root

# API Keys for Testing (optional - can use same as dev)
# TEST_GOOGLE_GENERATIVE_AI_API_KEY=your-test-api-key
# TEST_DEEPGRAM_API_KEY=your-test-api-key
.gitignore (vendored, 1 line added)
@@ -27,6 +27,7 @@ yarn-error.log*
 # local env files
 .env
 .env*.local
+.env.test

 # vercel
 .vercel
New binary files (Playwright screenshots under .playwright-mcp/):
BIN .playwright-mcp/chat-after-send.png (18 KiB)
BIN .playwright-mcp/chat-after-wait.png (18 KiB)
BIN .playwright-mcp/chat-desktop-fixed.png (78 KiB)
BIN .playwright-mcp/chat-desktop.png (44 KiB)
BIN .playwright-mcp/chat-initial-state.png (16 KiB)
BIN .playwright-mcp/chat-mobile-fixed.png (42 KiB)
BIN .playwright-mcp/chat-mobile.png (48 KiB)
BIN .playwright-mcp/chat-page-with-greeting.png (18 KiB)
BIN .playwright-mcp/chat-refreshed.png (18 KiB)
BIN .playwright-mcp/chat-with-greeting.png (49 KiB)
BIN .playwright-mcp/galaxy-page-debug.png (9.3 KiB)
BIN .playwright-mcp/galaxy-view-navigation.png (18 KiB)
BIN .playwright-mcp/galaxy-view-test.png (22 KiB)
BIN .playwright-mcp/login-page.png (29 KiB)
BIN .playwright-mcp/navigation-sidebar.png (18 KiB)
BIN .playwright-mcp/root-page-test.png (2.3 KiB)
BIN .playwright-mcp/typing-indicator-test.png (48 KiB)
@@ -11,6 +11,12 @@ EOF
 )"
 ```

+**Test Credentials**: For testing and development, use the test Bluesky account credentials stored in the .env file:
+- Handle: TEST_BLUESKY_HANDLE (aprongecko.bsky.social)
+- Password: TEST_BLUESKY_PASSWORD (Candles1)
+
+These credentials should be used for all automated testing (Magnitude, Playwright) and manual testing when needed. Do not attempt to authenticate without using these credentials.
+
 You are an expert-level, full-stack AI coding agent. Your task is to implement
 the "Ponderants" application. Product Vision: Ponderants is an AI-powered
 thought partner that interviews a user to capture, structure, and visualize
SECURITY.md (new file, 62 lines)
@@ -0,0 +1,62 @@
# Security Considerations

## 🚨 KNOWN SECURITY ISSUES

### Voice Transcription API Key Exposure

**Status:** Known issue - needs fix before production

**Issue:** The Deepgram API key is currently exposed to the frontend when users click the microphone button for voice transcription.

**Location:** `app/api/voice-token/route.ts`

**Risk:** Users with browser dev tools can extract the API key and use it for their own purposes, potentially incurring charges or exhausting API quotas.

**Why this exists:**
- Temporary tokens from `deepgram.auth.grantToken()` fail with WebSocket connections
- Direct API key usage is currently the only working approach for client-side WebSocket transcription

**Temporary mitigations:**
- The API key is only exposed when a user actively requests voice transcription
- Usage can be monitored through the Deepgram dashboard
- Rate limiting can be implemented on the `/api/voice-token` endpoint

**Proper fix options:**
1. **Server-side proxy (recommended):**
   - Implement a WebSocket proxy server that handles Deepgram communication
   - The client connects to our proxy, which forwards audio to Deepgram with the API key
   - Requires stateful server infrastructure (not serverless)

2. **Usage-limited keys:**
   - Use Deepgram API keys with strict usage limits
   - Rotate keys frequently
   - Implement server-side rate limiting per user

3. **Alternative transcription approach:**
   - Record audio client-side
   - Send audio files to a server endpoint
   - The server transcribes using the Deepgram API
   - Less real-time but more secure

**Action Required:** Choose and implement one of the above solutions before production deployment.
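As a concrete starting point for the rate-limiting mitigation listed above, here is a minimal sketch. It assumes a single long-lived server process (an in-memory map does not persist across serverless instances) and uses the `x-forwarded-for` header as a stand-in client identifier; both are assumptions, not the repository's implementation.

```typescript
// Sketch only: naive fixed-window rate limit for /api/voice-token.
// Assumption: one long-lived Node process; use Redis or similar on serverless.
import { NextRequest, NextResponse } from 'next/server';

const WINDOW_MS = 60_000; // 1-minute window
const MAX_REQUESTS = 5;   // per client per window
const hits = new Map<string, { count: number; windowStart: number }>();

export async function POST(request: NextRequest) {
  // Hypothetical client identifier; a real deployment would key on the session.
  const clientId = request.headers.get('x-forwarded-for') ?? 'unknown';
  const now = Date.now();
  const entry = hits.get(clientId);

  if (!entry || now - entry.windowStart > WINDOW_MS) {
    hits.set(clientId, { count: 1, windowStart: now });
  } else if (++entry.count > MAX_REQUESTS) {
    return NextResponse.json({ error: 'Too many requests' }, { status: 429 });
  }

  // ...the existing voice-token logic would follow here.
  return NextResponse.json({ key: process.env.DEEPGRAM_API_KEY });
}
```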
---

## Other Security Best Practices

### Environment Variables
All sensitive credentials must be stored in `.env` and never committed to git:
- `DEEPGRAM_API_KEY`
- `GOOGLE_GENERATIVE_AI_API_KEY`
- `SURREAL_JWT_SECRET`
- Database credentials

### Authentication
- JWT tokens stored in httpOnly cookies
- SurrealDB permission system enforces data access controls
- OAuth flow validates user identity through ATproto

### Input Validation
- All API endpoints validate inputs server-side
- AI-generated content is sanitized before display
- GraphQL queries use parameterized inputs
@@ -100,9 +100,11 @@ export async function GET(request: NextRequest) {
   // Parse custom state to determine redirect URL
   let returnTo = '/chat';
   try {
-    const customState = JSON.parse(state);
-    if (customState.returnTo) {
-      returnTo = customState.returnTo;
+    if (state) {
+      const customState = JSON.parse(state);
+      if (customState.returnTo) {
+        returnTo = customState.returnTo;
+      }
+    }
   } catch {
     // Invalid state JSON, use default
@@ -50,7 +50,7 @@ export async function POST(request: NextRequest) {

   if (error instanceof z.ZodError) {
     return NextResponse.json(
-      { error: 'Invalid request', details: error.errors },
+      { error: 'Invalid request', details: error.issues },
       { status: 400 }
     );
   }
@@ -1,6 +1,8 @@
 import { NextRequest, NextResponse } from 'next/server';
 import { cookies } from 'next/headers';
 import { UMAP } from 'umap-js';
 import { connectToDB } from '@/lib/db';
 import { verifySurrealJwt } from '@/lib/auth/jwt';

 /**
  * POST /api/calculate-graph
@@ -19,28 +21,16 @@ export async function POST(request: NextRequest) {
     return NextResponse.json({ error: 'Not authenticated' }, { status: 401 });
   }

+  // Verify JWT to get user's DID
+  const userSession = verifySurrealJwt(surrealJwt);
+  if (!userSession) {
+    return NextResponse.json({ error: 'Invalid auth token' }, { status: 401 });
+  }
+
+  const { did: userDid } = userSession;
+
   try {
-    // NOTE: For the hackathon, we use root credentials instead of JWT auth for simplicity.
-    // In production, this should use user-scoped authentication with proper SCOPE configuration.
-    const db = new (await import('surrealdb')).default();
-    await db.connect(process.env.SURREALDB_URL!);
-    await db.signin({
-      username: process.env.SURREALDB_USER!,
-      password: process.env.SURREALDB_PASS!,
-    });
-    await db.use({
-      namespace: process.env.SURREALDB_NS!,
-      database: process.env.SURREALDB_DB!,
-    });
-
-    // Get the user's DID from the JWT to filter nodes
-    const jwt = require('jsonwebtoken');
-    const decoded = jwt.decode(surrealJwt) as { did: string };
-    const userDid = decoded?.did;
-
-    if (!userDid) {
-      return NextResponse.json({ error: 'Invalid authentication token' }, { status: 401 });
-    }
+    const db = await connectToDB();

     // 1. Fetch all nodes that have an embedding but no coords_3d (filtered by user_did)
     const query = `SELECT id, embedding FROM node WHERE user_did = $userDid AND embedding != NONE AND coords_3d = NONE`;
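The hunk ends before the reduction step itself. Going by the `umap-js` import added above and the `coords_3d` fields used elsewhere in this diff, the reduction presumably looks something like the sketch below; the UMAP parameters are assumptions, not values taken from the repository.

```typescript
// Sketch (assumed parameters): reduce cached embeddings to 3D with umap-js.
import { UMAP } from 'umap-js';

function reduceTo3d(embeddings: number[][]): [number, number, number][] {
  // nNeighbors must stay below the number of input rows, so very small
  // collections would need a lower value than the default shown here.
  const umap = new UMAP({ nComponents: 3, nNeighbors: 15, minDist: 0.1 });
  // fit() returns one low-dimensional point per input embedding.
  return umap.fit(embeddings) as [number, number, number][];
}
```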
@@ -53,10 +53,26 @@ For all other conversation, just respond as a helpful AI.`;
     messages: convertToModelMessages(messages),

     // Provide the schema as a 'tool' to the model
+    // Tools in AI SDK v5 use inputSchema instead of parameters
     tools: {
       suggest_node: {
         description: 'Suggest a new thought node when an idea is complete.',
-        schema: NodeSuggestionSchema,
+        inputSchema: z.object({
+          title: z
+            .string()
+            .describe('A concise, descriptive title for the thought node.'),
+          content: z
+            .string()
+            .describe('The full, well-structured content of the thought node.'),
+          tags: z
+            .array(z.string())
+            .optional()
+            .describe('Optional tags for categorizing the node.'),
+        }),
+        execute: async ({ title, content, tags }) => ({
+          success: true,
+          suggestion: { title, content, tags },
+        }),
       },
     },
   });
app/api/debug/nodes/route.ts (new file, 62 lines)
@@ -0,0 +1,62 @@
import { NextRequest, NextResponse } from 'next/server';
import { cookies } from 'next/headers';
import { connectToDB } from '@/lib/db';
import { verifySurrealJwt } from '@/lib/auth/jwt';

/**
 * GET /api/debug/nodes
 *
 * Debug route to inspect node storage
 */
export async function GET(request: NextRequest) {
  const cookieStore = await cookies();
  const surrealJwt = cookieStore.get('ponderants-auth')?.value;

  if (!surrealJwt) {
    return NextResponse.json({ error: 'Not authenticated' }, { status: 401 });
  }

  const userSession = verifySurrealJwt(surrealJwt);
  if (!userSession) {
    return NextResponse.json({ error: 'Invalid auth token' }, { status: 401 });
  }

  const { did: userDid } = userSession;

  try {
    const db = await connectToDB();

    // Get all nodes for this user
    const nodesQuery = `
      SELECT id, title, body, atp_uri, embedding, coords_3d
      FROM node
      WHERE user_did = $userDid
    `;
    const results = await db.query(nodesQuery, { userDid });
    const nodes = results[0] || [];

    // Count stats
    const stats = {
      total: nodes.length,
      with_embeddings: nodes.filter((n: any) => n.embedding).length,
      with_coords: nodes.filter((n: any) => n.coords_3d).length,
      without_embeddings: nodes.filter((n: any) => !n.embedding).length,
      without_coords: nodes.filter((n: any) => !n.coords_3d).length,
    };

    return NextResponse.json({
      stats,
      nodes: nodes.map((n: any) => ({
        id: n.id,
        title: n.title,
        atp_uri: n.atp_uri,
        has_embedding: !!n.embedding,
        has_coords: !!n.coords_3d,
        coords_3d: n.coords_3d,
      })),
    });
  } catch (error) {
    console.error('[Debug Nodes] Error:', error);
    return NextResponse.json({ error: String(error) }, { status: 500 });
  }
}
app/api/galaxy/route.ts (new file, 105 lines)
@@ -0,0 +1,105 @@
import { NextRequest, NextResponse } from 'next/server';
import { cookies } from 'next/headers';
import { connectToDB } from '@/lib/db';
import { verifySurrealJwt } from '@/lib/auth/jwt';

interface NodeData {
  id: string;
  title: string;
  coords_3d: [number, number, number];
}

interface LinkData {
  in: string;
  out: string;
}

/**
 * GET /api/galaxy
 *
 * Fetches nodes with 3D coordinates and their links for visualization.
 * Automatically triggers graph calculation if needed.
 */
export async function GET(request: NextRequest) {
  const cookieStore = await cookies();
  const surrealJwt = cookieStore.get('ponderants-auth')?.value;

  if (!surrealJwt) {
    return NextResponse.json({ error: 'Not authenticated' }, { status: 401 });
  }

  // Verify JWT to get user's DID
  const userSession = verifySurrealJwt(surrealJwt);
  if (!userSession) {
    return NextResponse.json({ error: 'Invalid auth token' }, { status: 401 });
  }

  const { did: userDid } = userSession;

  try {
    const db = await connectToDB();

    // Fetch nodes that have 3D coordinates
    const nodesQuery = `
      SELECT id, title, coords_3d
      FROM node
      WHERE user_did = $userDid AND coords_3d != NONE
    `;
    const nodeResults = await db.query<[NodeData[]]>(nodesQuery, { userDid });
    const nodes = nodeResults[0] || [];

    // Fetch links between nodes
    const linksQuery = `
      SELECT in, out
      FROM links_to
    `;
    const linkResults = await db.query<[LinkData[]]>(linksQuery);
    const links = linkResults[0] || [];

    // If we have nodes but no coordinates, check if we should calculate
    if (nodes.length === 0) {
      // Check if we have nodes with embeddings but no coordinates
      const unmappedQuery = `
        SELECT count() as count
        FROM node
        WHERE user_did = $userDid AND embedding != NONE AND coords_3d = NONE
        GROUP ALL
      `;
      const unmappedResults = await db.query<[Array<{ count: number }>]>(unmappedQuery, { userDid });
      const unmappedCount = unmappedResults[0]?.[0]?.count || 0;

      if (unmappedCount >= 3) {
        console.log(`[Galaxy API] Found ${unmappedCount} unmapped nodes, triggering calculation...`);

        // Trigger graph calculation (don't await, return current state)
        fetch(`${process.env.NEXT_PUBLIC_BASE_URL || 'http://localhost:3000'}/api/calculate-graph`, {
          method: 'POST',
          headers: {
            'Cookie': `ponderants-auth=${surrealJwt}`,
          },
        }).catch((err) => {
          console.error('[Galaxy API] Failed to trigger graph calculation:', err);
        });

        return NextResponse.json({
          nodes: [],
          links: [],
          message: 'Calculating 3D coordinates... Refresh in a moment.',
        });
      }
    }

    console.log(`[Galaxy API] Returning ${nodes.length} nodes and ${links.length} links`);

    return NextResponse.json({
      nodes,
      links,
    });
  } catch (error) {
    console.error('[Galaxy API] Error:', error);
    return NextResponse.json(
      { error: 'Failed to fetch galaxy data' },
      { status: 500 }
    );
  }
}
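Because the route responds with empty `nodes`/`links` plus a `message` while coordinates are still being calculated, the client is expected to retry. A small sketch of that consumer, assuming the response shape above stays stable (the polling interval and attempt count are arbitrary choices):

```typescript
// Sketch: poll /api/galaxy until coordinates are ready.
async function fetchGalaxy(maxAttempts = 5): Promise<{ nodes: unknown[]; links: unknown[] }> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch('/api/galaxy', { credentials: 'include' });
    if (!res.ok) throw new Error(`Galaxy fetch failed: ${res.status}`);
    const data = await res.json();
    // A non-empty node list (or no "still calculating" message) means we are done.
    if (data.nodes.length > 0 || !data.message) return data;
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
  return { nodes: [], links: [] };
}
```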
app/api/generate-node-draft/route.ts (new file, 114 lines)
@@ -0,0 +1,114 @@
/**
 * API Route: Generate Node Draft
 *
 * Takes a conversation history and uses AI to generate a structured node draft
 * with title and content that captures the key insights from the conversation.
 */

import { google } from '@ai-sdk/google';
import { generateText } from 'ai';
import { NextRequest, NextResponse } from 'next/server';

const model = google('gemini-2.0-flash-exp');

export async function POST(request: NextRequest) {
  try {
    const { messages } = await request.json();

    if (!Array.isArray(messages) || messages.length === 0) {
      return NextResponse.json(
        { error: 'Invalid or empty conversation' },
        { status: 400 }
      );
    }

    // Format conversation for the AI
    const conversationText = messages
      .map((m: any) => {
        const role = m.role === 'user' ? 'User' : 'AI';
        let content = '';

        if ('parts' in m && Array.isArray(m.parts)) {
          const textParts = m.parts.filter((p: any) => p.type === 'text');
          content = textParts.map((p: any) => p.text).join('\n');
        } else if (m.content) {
          content = m.content;
        }

        return `${role}: ${content}`;
      })
      .join('\n\n');

    // Generate node draft using AI
    const result = await generateText({
      model,
      prompt: `You are helping a user capture their thoughts as a structured "Node" - a mini blog post.

Analyze the following conversation and create a Node draft that:
1. Captures the core insight or topic discussed
2. Structures the content coherently
3. Preserves the user's voice and key ideas
4. Focuses on the most important takeaways

Conversation:
${conversationText}

Respond with a JSON object containing:
- title: A concise, compelling title (3-8 words)
- content: The main body in markdown format (200-500 words, use headings/lists where appropriate)

Format your response as valid JSON only, no additional text.`,
    });

    // Parse the AI response
    let draft;
    try {
      draft = JSON.parse(result.text);
    } catch (e) {
      // If JSON parsing fails, try to extract from markdown code block
      const jsonMatch = result.text.match(/```json\s*([\s\S]*?)\s*```/);
      if (jsonMatch) {
        draft = JSON.parse(jsonMatch[1]);
      } else {
        throw new Error('Failed to parse AI response as JSON');
      }
    }

    // Validate the draft structure
    if (!draft.title || !draft.content) {
      throw new Error('Generated draft missing required fields');
    }

    // Add conversation context (last 3 messages for reference)
    const contextMessages = messages.slice(-3);
    const conversationContext = contextMessages
      .map((m: any) => {
        const role = m.role === 'user' ? 'User' : 'AI';
        let content = '';

        if ('parts' in m && Array.isArray(m.parts)) {
          const textParts = m.parts.filter((p: any) => p.type === 'text');
          content = textParts.map((p: any) => p.text).join('\n');
        } else if (m.content) {
          content = m.content;
        }

        return `${role}: ${content}`;
      })
      .join('\n\n');

    return NextResponse.json({
      draft: {
        title: draft.title,
        content: draft.content,
        conversationContext,
      },
    });
  } catch (error) {
    console.error('[Generate Node Draft] Error:', error);
    return NextResponse.json(
      { error: error instanceof Error ? error.message : 'Failed to generate node draft' },
      { status: 500 }
    );
  }
}
@@ -1,27 +1,35 @@
 import { NextRequest, NextResponse } from 'next/server';
 import { cookies } from 'next/headers';
-import { AtpAgent, RichText } from '@atproto/api';
+import { RichText, Agent } from '@atproto/api';
 import { connectToDB } from '@/lib/db';
 import { generateEmbedding } from '@/lib/ai';
 import { verifySurrealJwt } from '@/lib/auth/jwt';
+import { getOAuthClient } from '@/lib/auth/oauth-client';

 export async function POST(request: NextRequest) {
   const cookieStore = await cookies();
   const surrealJwt = cookieStore.get('ponderants-auth')?.value;
-  const atpAccessToken = cookieStore.get('atproto_access_token')?.value;

-  if (!surrealJwt || !atpAccessToken) {
+  console.log('[POST /api/nodes] Auth check:', {
+    hasSurrealJwt: !!surrealJwt,
+  });
+
+  if (!surrealJwt) {
+    console.error('[POST /api/nodes] Missing auth cookie');
     return NextResponse.json({ error: 'Not authenticated' }, { status: 401 });
   }

   // Verify the JWT and extract user info
   const userSession = verifySurrealJwt(surrealJwt);
   if (!userSession) {
+    console.error('[POST /api/nodes] Invalid JWT');
     return NextResponse.json({ error: 'Invalid auth token' }, { status: 401 });
   }

   const { did: userDid } = userSession;

+  console.log('[POST /api/nodes] Verified user DID:', userDid);
+
   const { title, body, links } = (await request.json()) as {
     title: string;
     body: string;
@@ -39,67 +47,95 @@ export async function POST(request: NextRequest) {
   let atp_cid: string;

   try {
-    // Get the PDS URL from environment or use default
-    const pdsUrl = process.env.BLUESKY_PDS_URL || 'https://bsky.social';
-    const agent = new AtpAgent({ service: pdsUrl });
+    // Get the OAuth client and restore the user's session
+    const client = await getOAuthClient();
+    console.log('[POST /api/nodes] Got OAuth client, attempting to restore session for DID:', userDid);

-    // Resume the session with the access token
-    await agent.resumeSession({
-      accessJwt: atpAccessToken,
-      refreshJwt: '', // We don't need refresh for this operation
-      did: userDid,
-      handle: userSession.handle,
-    });
+    // Restore the session - returns an OAuthSession object directly
+    const session = await client.restore(userDid);

-    // Format the body as RichText to detect links, mentions, etc.
-    const rt = new RichText({ text: body });
+    // Create an Agent from the session
+    const agent = new Agent(session);
+
+    console.log('[POST /api/nodes] Successfully restored OAuth session and created agent');
+
+    // Bluesky posts are limited to 300 graphemes
+    // Format a concise post with title and truncated body
+    const maxLength = 280; // Leave room for ellipsis
+    const fullText = `${title}\n\n${body}`;
+
+    let postText: string;
+    if (fullText.length <= maxLength) {
+      postText = fullText;
+    } else {
+      // Truncate at word boundary
+      const truncated = fullText.substring(0, maxLength);
+      const lastSpace = truncated.lastIndexOf(' ');
+      postText = truncated.substring(0, lastSpace > 0 ? lastSpace : maxLength) + '...';
+    }
+
+    // Format the text as RichText to detect links, mentions, etc.
+    const rt = new RichText({ text: postText });
     await rt.detectFacets(agent);

-    // Create the ATproto record
+    // Create the ATproto record using standard Bluesky post collection
+    // This works with OAuth scope 'atproto' without requiring granular permissions
     const response = await agent.api.com.atproto.repo.createRecord({
       repo: userDid,
-      collection: 'com.ponderants.node',
+      collection: 'app.bsky.feed.post',
       record: {
-        $type: 'com.ponderants.node',
-        title,
-        body: rt.text,
+        $type: 'app.bsky.feed.post',
+        text: rt.text,
         facets: rt.facets,
         links: links || [],
         createdAt,
+        // Add a tag to identify this as a Ponderants node
+        tags: ['ponderants-node'],
       },
     });

-    atp_uri = response.uri;
-    atp_cid = response.cid;
+    atp_uri = response.data.uri;
+    atp_cid = response.data.cid;
+
+    console.log('[POST /api/nodes] ✓ Published to ATproto PDS as standard post:', atp_uri);
   } catch (error) {
-    console.error('ATproto write error:', error);
+    console.error('[POST /api/nodes] ATproto write error:', error);
     return NextResponse.json({ error: 'Failed to publish to PDS' }, { status: 500 });
   }

   // --- Step 2: Generate AI Embedding (Cache) ---
-  let embedding: number[];
+  // Embeddings are optional - used for vector search and 3D visualization
+  let embedding: number[] | undefined;
   try {
     embedding = await generateEmbedding(title + '\n' + body);
+    console.log('[POST /api/nodes] ✓ Generated embedding vector');
   } catch (error) {
-    console.error('Embedding error:', error);
-    return NextResponse.json({ error: 'Failed to generate embedding' }, { status: 500 });
+    console.warn('[POST /api/nodes] ⚠ Embedding generation failed (non-critical):', error);
+    // Continue without embedding - it's only needed for advanced features
+    embedding = undefined;
   }

   // --- Step 3: Write to App View Cache (SurrealDB) ---
+  // The cache is optional - the ATproto PDS is the source of truth
   try {
-    const db = await connectToDB(surrealJwt);
+    const db = await connectToDB();

     // Create the node record in our cache.
     // The `user_did` field is set, satisfying the 'PERMISSIONS'
     // clause defined in the schema.
-    const newNode = await db.create('node', {
+    const nodeData: any = {
       user_did: userDid,
       atp_uri: atp_uri,
       title: title,
       body: body, // Store the raw text body
-      embedding: embedding,
       // coords_3d will be calculated later by UMAP
-    });
+    };
+
+    // Only include embedding if it was successfully generated
+    if (embedding) {
+      nodeData.embedding = embedding;
+    }
+
+    const newNode = await db.create('node', nodeData);

     // Handle linking
     if (links && links.length > 0) {
@@ -120,11 +156,16 @@ export async function POST(request: NextRequest) {
       }
     }

-    return NextResponse.json(newNode);
+    console.log('[POST /api/nodes] ✓ Cached node in SurrealDB');
+    return NextResponse.json({ success: true, atp_uri, node: newNode });
   } catch (error) {
-    console.error('SurrealDB write error:', error);
-    // TODO: Implement rollback for the ATproto post?
-    // This is a known limitation of the write-through cache pattern.
-    return NextResponse.json({ error: 'Failed to save to app cache' }, { status: 500 });
+    console.warn('[POST /api/nodes] ⚠ SurrealDB cache write failed (non-critical):', error);
+    // The node was successfully published to ATproto (source of truth)
+    // Cache failure is non-critical - advanced features may be unavailable
+    return NextResponse.json({
+      success: true,
+      atp_uri,
+      warning: 'Node published to Bluesky, but cache update failed. Advanced features may be unavailable.',
+    });
   }
 }
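Taken together, these hunks change the route's contract: success now returns `{ success, atp_uri, node }`, and a cache failure degrades to a warning instead of a 500. A sketch of a client call against that contract (the title and body here are placeholders):

```typescript
// Sketch: publishing a node from the client (shapes taken from the route above).
const res = await fetch('/api/nodes', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  credentials: 'include', // sends the ponderants-auth cookie
  body: JSON.stringify({
    title: 'My first node',
    body: 'Full text of the thought...',
    links: [], // optional links to related nodes
  }),
});
const result = await res.json();
if (result.warning) {
  // Published to the PDS, but the SurrealDB cache write failed.
  console.warn(result.warning);
}
```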
@@ -2,6 +2,7 @@ import { NextRequest, NextResponse } from 'next/server';
 import { cookies } from 'next/headers';
 import { connectToDB } from '@/lib/db';
 import { generateEmbedding } from '@/lib/ai';
+import { verifySurrealJwt } from '@/lib/auth/jwt';

 /**
  * POST /api/suggest-links
@@ -18,6 +19,14 @@ export async function POST(request: NextRequest) {
     return NextResponse.json({ error: 'Not authenticated' }, { status: 401 });
   }

+  // Verify JWT to get user's DID
+  const userSession = verifySurrealJwt(surrealJwt);
+  if (!userSession) {
+    return NextResponse.json({ error: 'Invalid auth token' }, { status: 401 });
+  }
+
+  const { did: userDid } = userSession;
+
   const { body } = (await request.json()) as { body: string };

   if (!body) {
@@ -28,15 +37,13 @@ export async function POST(request: NextRequest) {
     // 1. Generate embedding for the current draft
     const draftEmbedding = await generateEmbedding(body);

-    // 2. Connect to DB (as the user)
-    // This enforces row-level security - user can only search their own nodes
-    const db = await connectToDB(surrealJwt);
+    // 2. Connect to DB with root credentials
+    const db = await connectToDB();

     // 3. Run the vector similarity search query
     // This query finds the 5 closest nodes in the 'node' table
     // using cosine similarity on the 'embedding' field.
-    // It only searches nodes WHERE user_did = $token.did,
-    // which is enforced by the table's PERMISSIONS.
+    // We filter by user_did to ensure users only see their own nodes.
     const query = `
       SELECT
         id,
@@ -45,6 +52,7 @@ export async function POST(request: NextRequest) {
         atp_uri,
         vector::similarity::cosine(embedding, $draft_embedding) AS score
       FROM node
+      WHERE user_did = $user_did
       ORDER BY score DESC
       LIMIT 5;
     `;
@@ -57,6 +65,7 @@ export async function POST(request: NextRequest) {
       score: number;
     }>]>(query, {
       draft_embedding: draftEmbedding,
+      user_did: userDid,
     });

     // The query returns an array of result sets. We want the first one.
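For reference, a client-side call to this endpoint might look like the sketch below. The request body matches the route above; the shape of the response wrapper is an assumption, since the hunks end before the route's return statement.

```typescript
// Sketch: ask for link suggestions while drafting a node.
const res = await fetch('/api/suggest-links', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  credentials: 'include',
  body: JSON.stringify({ body: 'Draft text of the node...' }),
});
// Per the query above, each suggestion carries id, title, atp_uri, and a
// cosine-similarity score; the exact response wrapper is hypothetical.
const suggestions = await res.json();
```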
app/api/tts/route.ts (new file, 83 lines)
@@ -0,0 +1,83 @@
import { NextRequest, NextResponse } from 'next/server';
import { createClient } from '@deepgram/sdk';

/**
 * Text-to-Speech API route using Deepgram Aura
 *
 * Converts text to natural-sounding speech using Deepgram's Aura-2 model.
 * Returns audio data that can be played in the browser.
 */
export async function POST(request: NextRequest) {
  const deepgramApiKey = process.env.DEEPGRAM_API_KEY;

  if (!deepgramApiKey) {
    return NextResponse.json(
      { error: 'Deepgram API key not configured' },
      { status: 500 }
    );
  }

  try {
    const { text } = await request.json();

    if (!text || typeof text !== 'string') {
      return NextResponse.json(
        { error: 'Text parameter is required' },
        { status: 400 }
      );
    }

    console.log('[TTS] Generating speech for text:', text.substring(0, 50) + '...');

    const deepgram = createClient(deepgramApiKey);

    // Generate speech using Deepgram Aura
    const response = await deepgram.speak.request(
      { text },
      {
        model: 'aura-2-thalia-en', // Natural female voice
        encoding: 'linear16',
        container: 'wav',
      }
    );

    // Get the audio stream
    const stream = await response.getStream();

    if (!stream) {
      throw new Error('No audio stream returned from Deepgram');
    }

    // Convert stream to buffer
    const chunks: Uint8Array[] = [];
    const reader = stream.getReader();

    try {
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        if (value) chunks.push(value);
      }
    } finally {
      reader.releaseLock();
    }

    const buffer = Buffer.concat(chunks);

    console.log('[TTS] ✓ Generated', buffer.length, 'bytes of audio');

    // Return audio with proper headers
    return new NextResponse(buffer, {
      headers: {
        'Content-Type': 'audio/wav',
        'Content-Length': buffer.length.toString(),
      },
    });
  } catch (error) {
    console.error('[TTS] Error generating speech:', error);
    return NextResponse.json(
      { error: 'Failed to generate speech' },
      { status: 500 }
    );
  }
}
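The matching consumer already exists in `app/chat/page.tsx.backup` below (`playAudio`); distilled to its essentials, playing the returned WAV in the browser looks like this sketch:

```typescript
// Sketch: play the /api/tts output in the browser.
async function speak(text: string): Promise<void> {
  const res = await fetch('/api/tts', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text }),
  });
  if (!res.ok) throw new Error('TTS request failed');
  const blob = await res.blob(); // audio/wav per the route's headers
  const url = URL.createObjectURL(blob);
  const audio = new Audio(url);
  audio.onended = () => URL.revokeObjectURL(url); // free the blob URL when done
  await audio.play();
}
```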
@@ -12,25 +12,116 @@ import {
Group,
Text,
Loader,
ActionIcon,
Tooltip,
} from '@mantine/core';
import { useRef, useState, useEffect } from 'react';
import { MicrophoneRecorder } from '@/components/MicrophoneRecorder';
import { useRef, useEffect, useState } from 'react';
import { IconVolume, IconMicrophone, IconNotes } from '@tabler/icons-react';
import { UserMenu } from '@/components/UserMenu';
import { useVoiceMode } from '@/hooks/useVoiceMode';
import { useAppMachine } from '@/hooks/useAppMachine';
import { notifications } from '@mantine/notifications';
import { useMediaQuery } from '@mantine/hooks';

/**
 * Get the voice button text based on the current state
 */
function getVoiceButtonText(state: any): string {
if (state.matches('idle')) {
return 'Start Voice Conversation';
} else if (state.matches('checkingForGreeting')) {
return 'Checking for greeting...';
} else if (state.matches('listening')) {
return 'Listening... Start speaking';
} else if (state.matches('userSpeaking')) {
return 'Speaking... (will auto-submit after 3s silence)';
} else if (state.matches('timingOut')) {
return 'Speaking... (auto-submits soon)';
} else if (state.matches('submittingUser')) {
return 'Submitting...';
} else if (state.matches('waitingForAI')) {
return 'Waiting for AI...';
} else if (state.matches('generatingTTS')) {
return 'Generating speech...';
} else if (state.matches('playingTTS')) {
return 'AI is speaking...';
}
return 'Start Voice Conversation';
}

export default function ChatPage() {
const viewport = useRef<HTMLDivElement>(null);
const { messages, sendMessage, setMessages, status } = useChat();
const isMobile = useMediaQuery('(max-width: 768px)');

// Text input state (managed manually since useChat doesn't provide form helpers)
const [input, setInput] = useState('');

const { messages, sendMessage, setMessages, status } = useChat({
api: '/api/chat',
body: {
persona: 'Socratic',
// App machine for navigation
const appActor = useAppMachine();

// State for creating node
const [isCreatingNode, setIsCreatingNode] = useState(false);

// Use the clean voice mode hook
const { state, send, transcript, error } = useVoiceMode({
messages,
status,
onSubmit: (text: string) => {
sendMessage({ text });
},
credentials: 'include',
});

// Handler to create node from conversation
const handleCreateNode = async () => {
if (messages.length === 0) {
notifications.show({
title: 'No conversation',
message: 'Start a conversation before creating a node',
color: 'red',
});
return;
}

setIsCreatingNode(true);

try {
const response = await fetch('/api/generate-node-draft', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
credentials: 'include', // Include cookies for authentication
body: JSON.stringify({ messages }),
});

if (!response.ok) {
const errorData = await response.json();
throw new Error(errorData.error || 'Failed to generate node draft');
}

const { draft } = await response.json();

// Transition to edit mode with the draft
appActor.send({
type: 'CREATE_NODE_FROM_CONVERSATION',
draft,
});

notifications.show({
title: 'Node draft created',
message: 'Review and edit your node before publishing',
color: 'green',
});
} catch (error) {
console.error('[Create Node] Error:', error);
notifications.show({
title: 'Error',
message: error instanceof Error ? error.message : 'Failed to create node draft',
color: 'red',
});
} finally {
setIsCreatingNode(false);
}
};

// Add initial greeting message on first load
useEffect(() => {
if (messages.length === 0) {
@@ -44,7 +135,7 @@ export default function ChatPage() {
text: 'Welcome to Ponderants! I\'m here to help you explore and structure your ideas through conversation.\n\nWhat would you like to talk about today? I can adapt my interview style to best suit your needs (Socratic questioning, collaborative brainstorming, or other approaches).\n\nJust start sharing your thoughts, and we\'ll discover meaningful insights together.',
},
],
},
} as any,
]);
}
}, []);
@@ -57,16 +148,7 @@ export default function ChatPage() {
});
}, [messages]);

const handleSubmit = (e: React.FormEvent) => {
e.preventDefault();
if (!input.trim() || status === 'submitted' || status === 'streaming') return;

sendMessage({ text: input });
setInput('');
};

const handleNewConversation = () => {
// Clear all messages and reset to initial greeting
setMessages([
{
id: 'initial-greeting',
@@ -77,35 +159,65 @@ export default function ChatPage() {
text: 'Welcome to Ponderants! I\'m here to help you explore and structure your ideas through conversation.\n\nWhat would you like to talk about today? I can adapt my interview style to best suit your needs (Socratic questioning, collaborative brainstorming, or other approaches).\n\nJust start sharing your thoughts, and we\'ll discover meaningful insights together.',
},
],
},
} as any,
]);
};

return (
<Container size="md" h="100vh" style={{ display: 'flex', flexDirection: 'column' }}>
<Group justify="space-between" py="md">
<Title order={2}>
Ponderants Interview
</Title>
<Group gap="md">
<Tooltip label="Start a new conversation">
<Button
variant="subtle"
onClick={handleNewConversation}
disabled={status === 'submitted' || status === 'streaming'}
>
New Conversation
</Button>
</Tooltip>
<UserMenu />
</Group>
</Group>
const isVoiceActive = !state.matches('idle');
const canSkipAudio = state.hasTag('canSkipAudio');

<ScrollArea
h="100%"
style={{ flex: 1 }}
viewportRef={viewport}
return (
<Container size="md" style={{ paddingTop: '80px', paddingBottom: '300px', maxWidth: '100%' }}>
{/* Fixed Header */}
<Paper
withBorder
p="md"
radius={0}
style={{
position: 'fixed',
top: 0,
left: 0,
right: 0,
zIndex: 50,
borderBottom: '1px solid #dee2e6',
backgroundColor: '#1a1b1e',
}}
>
<Container size="md">
<Group justify="space-between">
<Title order={2}>Convo</Title>
{!isMobile && (
<Group gap="md">
<Tooltip label="Generate a node from this conversation">
<Button
variant="light"
color="blue"
leftSection={<IconNotes size={18} />}
onClick={handleCreateNode}
loading={isCreatingNode}
disabled={messages.length === 0 || status === 'submitted' || status === 'streaming'}
>
Create Node
</Button>
</Tooltip>
<Tooltip label="Start a new conversation">
<Button
variant="subtle"
onClick={handleNewConversation}
disabled={status === 'submitted' || status === 'streaming'}
>
New Conversation
</Button>
</Tooltip>
<UserMenu />
</Group>
)}
</Group>
</Container>
</Paper>

{/* Scrollable Messages Area */}
<ScrollArea h="calc(100vh - 380px)" viewportRef={viewport}>
<Stack gap="md" pb="xl">
{messages.map((m) => (
<Paper
@@ -116,117 +228,223 @@ export default function ChatPage() {
radius="lg"
style={{
alignSelf: m.role === 'user' ? 'flex-end' : 'flex-start',
backgroundColor:
m.role === 'user' ? '#343a40' : '#212529',
backgroundColor: m.role === 'user' ? '#343a40' : '#212529',
}}
w="80%"
>
<Text fw={700} size="sm">{m.role === 'user' ? 'You' : 'AI'}</Text>
{m.parts.map((part, i) => {
if (part.type === 'text') {
return (
<Text key={i} style={{ whiteSpace: 'pre-wrap' }}>
{part.text}
</Text>
);
<Text fw={700} size="sm">
{m.role === 'user' ? 'You' : 'AI'}
</Text>
{(() => {
if ('parts' in m && Array.isArray((m as any).parts)) {
return (m as any).parts.map((part: any, i: number) => {
if (part.type === 'text') {
return (
<Text key={i} style={{ whiteSpace: 'pre-wrap' }}>
{part.text}
</Text>
);
}
return null;
});
}

// Handle tool calls (e.g., suggest_node)
if (part.type === 'tool-call') {
return (
<Paper key={i} withBorder p="xs" mt="xs" bg="dark.6">
<Text size="xs" c="dimmed" mb="xs">
💡 Node Suggestion
</Text>
<Text fw={600}>{part.args.title}</Text>
<Text size="sm" mt="xs">
{part.args.content}
</Text>
{part.args.tags && part.args.tags.length > 0 && (
<Group gap="xs" mt="xs">
{part.args.tags.map((tag: string, tagIdx: number) => (
<Text key={tagIdx} size="xs" c="blue.4">
#{tag}
</Text>
))}
</Group>
)}
</Paper>
);
}

return null;
})}
return <Text>Message content unavailable</Text>;
})()}
</Paper>
))}

{/* Typing indicator while AI is generating a response */}
{/* Typing indicator */}
{(status === 'submitted' || status === 'streaming') && (
<Paper
withBorder
shadow="md"
p="sm"
radius="lg"
style={{
alignSelf: 'flex-start',
backgroundColor: '#212529',
}}
style={{ alignSelf: 'flex-start', backgroundColor: '#212529' }}
w="80%"
>
<Text fw={700} size="sm">AI</Text>
<Text fw={700} size="sm">
AI
</Text>
<Group gap="xs" mt="xs">
<Loader size="xs" />
<Text size="sm" c="dimmed">Thinking...</Text>
<Text size="sm" c="dimmed">
Thinking...
</Text>
</Group>
</Paper>
)}

{/* Show current transcript while speaking */}
{transcript && (state.matches('userSpeaking') || state.matches('timingOut')) && (
<Paper
withBorder
shadow="md"
p="sm"
radius="lg"
style={{ alignSelf: 'flex-end', backgroundColor: '#343a40' }}
w="80%"
>
<Text fw={700} size="sm">
You (speaking...)
</Text>
<Text style={{ whiteSpace: 'pre-wrap' }}>{transcript}</Text>
</Paper>
)}
</Stack>
</ScrollArea>

<form onSubmit={handleSubmit}>
<Paper withBorder p="sm" radius="xl" my="md">
<Group>
<TextInput
value={input}
onChange={(e) => setInput(e.currentTarget.value)}
placeholder="Speak or type your thoughts..."
style={{ flex: 1 }}
styles={{
input: {
paddingLeft: '1rem',
paddingRight: '0.5rem',
},
}}
variant="unstyled"
disabled={status === 'submitted' || status === 'streaming'}
/>

{/* Microphone Recorder */}
<MicrophoneRecorder
onTranscriptUpdate={(transcript) => {
setInput(transcript);
}}
onTranscriptFinalized={(transcript) => {
setInput(transcript);
setTimeout(() => {
const form = document.querySelector('form');
if (form) {
form.requestSubmit();
}
}, 100);
}}
/>

{/* Fixed Voice Mode Controls */}
<Paper
withBorder
p="md"
radius={0}
style={{
position: 'fixed',
bottom: 0,
left: 0,
right: 0,
zIndex: 50,
borderTop: '1px solid #dee2e6',
backgroundColor: '#1a1b1e',
}}
>
<Container size="md">
<Stack gap="sm">
<Group gap="sm">
{/* Main Voice Button */}
<Button
type="submit"
onClick={() => send({ type: isVoiceActive ? 'STOP_VOICE' : 'START_VOICE' })}
size="xl"
radius="xl"
loading={status === 'submitted' || status === 'streaming'}
h={80}
style={{ flex: 1 }}
color={
canSkipAudio
? 'blue'
: state.matches('userSpeaking') || state.matches('timingOut')
? 'green'
: state.matches('listening')
? 'yellow'
: state.matches('waitingForAI') || state.matches('submittingUser')
? 'blue'
: 'gray'
}
variant={isVoiceActive ? 'filled' : 'light'}
leftSection={
canSkipAudio ? (
<IconVolume size={32} />
) : state.matches('userSpeaking') ||
state.matches('timingOut') ||
state.matches('listening') ? (
<IconMicrophone size={32} />
) : (
<IconMicrophone size={32} />
)
}
disabled={status === 'submitted' || status === 'streaming'}
>
Send
{getVoiceButtonText(state)}
</Button>

{/* Skip Button */}
{canSkipAudio && (
<Button
onClick={() => send({ type: 'SKIP_AUDIO' })}
size="xl"
radius="xl"
h={80}
color="gray"
variant="outline"
>
Skip
</Button>
)}
</Group>
</Paper>
</form>

{/* Development Test Controls */}
{process.env.NODE_ENV === 'development' && (
<Paper withBorder p="sm" radius="md" style={{ backgroundColor: '#1a1b1e' }}>
<Stack gap="xs">
<Text size="xs" fw={700} c="dimmed">
DEV: State Machine Testing
</Text>
<Text size="xs" c="dimmed">
State: {JSON.stringify(state.value)} | Tags: {Array.from(state.tags).join(', ')}
</Text>
<Group gap="xs">
<Button
size="xs"
onClick={() => send({ type: 'START_LISTENING' })}
disabled={!state.matches('checkingForGreeting')}
>
Force Listen
</Button>
<Button
size="xs"
onClick={() => send({ type: 'USER_STARTED_SPEAKING' })}
disabled={!state.matches('listening')}
>
Simulate Speech
</Button>
<Button
size="xs"
onClick={() => send({ type: 'FINALIZED_PHRASE', phrase: 'Test message' })}
disabled={!state.matches('userSpeaking') && !state.matches('listening')}
>
Add Phrase
</Button>
<Button
size="xs"
onClick={() => send({ type: 'SILENCE_TIMEOUT' })}
disabled={!state.matches('timingOut')}
>
Trigger Timeout
</Button>
</Group>
</Stack>
</Paper>
)}

{/* Text Input */}
<form
onSubmit={(e) => {
e.preventDefault();
if (input.trim() && !isVoiceActive) {
sendMessage({ text: input });
setInput('');
}
}}
>
<Group>
<TextInput
value={input}
onChange={(e) => setInput(e.currentTarget.value)}
placeholder="Or type your thoughts here..."
style={{ flex: 1 }}
variant="filled"
disabled={isVoiceActive}
/>
<Button
type="submit"
radius="xl"
loading={status === 'submitted' || status === 'streaming'}
disabled={!input.trim() || isVoiceActive}
>
Send
</Button>
</Group>
</form>

{/* Error Display */}
{error && (
<Text size="sm" c="red">
Error: {error}
</Text>
)}
</Stack>
</Container>
</Paper>
</Container>
);
}
app/chat/page.tsx.backup (new file, 664 lines)
@@ -0,0 +1,664 @@
'use client';

import { useChat } from '@ai-sdk/react';
import {
  Stack,
  TextInput,
  Button,
  Paper,
  ScrollArea,
  Title,
  Container,
  Group,
  Text,
  Loader,
  ActionIcon,
  Tooltip,
} from '@mantine/core';
import { useRef, useState, useEffect, useCallback } from 'react';
import { IconVolume, IconMicrophone, IconMicrophoneOff } from '@tabler/icons-react';
import { UserMenu } from '@/components/UserMenu';

// Define the shape of the Deepgram transcript
interface DeepgramTranscript {
  channel: {
    alternatives: Array<{
      transcript: string;
    }>;
  };
  is_final: boolean;
  speech_final: boolean;
}

type VoiceState = 'idle' | 'listening' | 'user-speaking' | 'processing' | 'ai-speaking';

export default function ChatPage() {
  const viewport = useRef<HTMLDivElement>(null);
  const [input, setInput] = useState('');
  const [voiceState, setVoiceState] = useState<VoiceState>('idle');
  const [countdown, setCountdown] = useState<number>(3);
  const [isGeneratingSpeech, setIsGeneratingSpeech] = useState(false);
  const lastSpokenMessageId = useRef<string | null>(null);
  const audioRef = useRef<HTMLAudioElement | null>(null);
  const mediaRecorderRef = useRef<MediaRecorder | null>(null);
  const socketRef = useRef<WebSocket | null>(null);
  const transcriptRef = useRef<string>('');
  const silenceTimeoutRef = useRef<NodeJS.Timeout | null>(null);
  const silenceStartTimeRef = useRef<number | null>(null);
  const countdownIntervalRef = useRef<NodeJS.Timeout | null>(null);
  const hasStartedSpeakingRef = useRef(false);

  const { messages, sendMessage, setMessages, status } = useChat({
    api: '/api/chat',
    body: {
      persona: 'Socratic',
    },
    credentials: 'include',
  });

  // Handle AI response in voice conversation mode
  useEffect(() => {
    if (voiceState !== 'processing') return;

    console.log('[Voice Mode] Effect running - voiceState: processing, status:', status, 'messages:', messages.length);

    // Wait until the AI response is complete (status returns to 'ready')
    if (status !== 'ready') {
      console.log('[Voice Mode] Waiting for status to be ready, current:', status);
      return;
    }

    // Find the latest assistant message
    console.log('[Voice Mode] All messages:', messages.map(m => ({ role: m.role, id: m.id, preview: m.parts[0]?.text?.substring(0, 30) })));

    const lastAssistantMessage = [...messages]
      .reverse()
      .find((m) => m.role === 'assistant');

    if (!lastAssistantMessage) {
      console.log('[Voice Mode] No assistant message found');
      return;
    }

    console.log('[Voice Mode] Selected message ID:', lastAssistantMessage.id);
    console.log('[Voice Mode] Selected message text preview:', lastAssistantMessage.parts.find(p => p.type === 'text')?.text?.substring(0, 50));
    console.log('[Voice Mode] Last spoken message ID:', lastSpokenMessageId.current);

    // Skip if we've already spoken this message
    if (lastSpokenMessageId.current === lastAssistantMessage.id) {
      console.log('[Voice Mode] Already spoke this message, skipping');
      return;
    }

    // Extract text from the message
    const textPart = lastAssistantMessage.parts.find((p) => p.type === 'text');
    if (!textPart || !textPart.text) {
      console.log('[Voice Mode] No text part found in message');
      return;
    }

    // Play the audio and transition to ai-speaking state
    console.log('[Voice Mode] Transitioning to ai-speaking, will play audio');
    setVoiceState('ai-speaking');
    playAudio(textPart.text, lastAssistantMessage.id);
  }, [messages, voiceState, status]);

  const playAudio = async (text: string, messageId: string) => {
    try {
      console.log('[Voice Mode] Generating speech for message:', messageId);
      setIsGeneratingSpeech(true);

      const response = await fetch('/api/tts', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ text }),
      });

      if (!response.ok) {
        throw new Error('Failed to generate speech');
      }

      const audioBlob = await response.blob();
      const audioUrl = URL.createObjectURL(audioBlob);

      // Create or reuse audio element
      if (!audioRef.current) {
        audioRef.current = new Audio();
      }

      audioRef.current.src = audioUrl;
      audioRef.current.onended = () => {
        URL.revokeObjectURL(audioUrl);
        console.log('[Voice Mode] ✓ Finished playing audio, starting new listening session');
        lastSpokenMessageId.current = messageId;
        setIsGeneratingSpeech(false);

        // After AI finishes speaking, go back to listening for user
        startListening();
      };

      audioRef.current.onerror = () => {
        URL.revokeObjectURL(audioUrl);
        console.error('[Voice Mode] Error playing audio');
        setIsGeneratingSpeech(false);
        // On error, also go back to listening
        startListening();
      };

      await audioRef.current.play();
      console.log('[Voice Mode] ✓ Playing audio');
      setIsGeneratingSpeech(false); // Audio is now playing
    } catch (error) {
      console.error('[Voice Mode] Error:', error);
      setIsGeneratingSpeech(false);
      // On error, go back to listening
      startListening();
    }
  };

  const submitUserInput = useCallback(() => {
    // Clear any pending silence timeout and countdown
    if (silenceTimeoutRef.current) {
      clearTimeout(silenceTimeoutRef.current);
      silenceTimeoutRef.current = null;
    }
    if (countdownIntervalRef.current) {
      clearInterval(countdownIntervalRef.current);
      countdownIntervalRef.current = null;
    }
    silenceStartTimeRef.current = null;
    setCountdown(3);

    // Stop recording
    if (mediaRecorderRef.current) {
      mediaRecorderRef.current.stop();
      mediaRecorderRef.current = null;
    }
    if (socketRef.current) {
      socketRef.current.close();
      socketRef.current = null;
    }

    // Reset speaking flag
    hasStartedSpeakingRef.current = false;

    // Send the transcript as a message if we have one
    if (transcriptRef.current.trim()) {
      console.log('[Voice Mode] Submitting transcript:', transcriptRef.current);
      setInput(transcriptRef.current);
      setVoiceState('processing');

      setTimeout(() => {
        const form = document.querySelector('form');
        if (form) {
          console.log('[Voice Mode] Form found, submitting...');
          form.requestSubmit();
        } else {
          console.error('[Voice Mode] Form not found!');
        }
      }, 100);
    } else {
      // If no transcript, go back to listening
      console.log('[Voice Mode] No transcript to submit, going back to listening');
      startListening();
    }

    transcriptRef.current = '';
  }, []);

  const startListening = useCallback(async () => {
    transcriptRef.current = '';
    setInput('');
    hasStartedSpeakingRef.current = false;
    // DON'T reset lastSpokenMessageId here - we need it to track what we've already spoken
    silenceStartTimeRef.current = null;
    setCountdown(3);
    setVoiceState('listening');

    try {
      // 1. Get the Deepgram API key
      const response = await fetch('/api/voice-token', { method: 'POST' });
      const data = await response.json();

      if (data.error) {
        throw new Error(data.error);
      }

      const { key } = data;

      // 2. Access the microphone
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

      // 3. Open direct WebSocket to Deepgram with voice activity detection
      const socket = new WebSocket(
        'wss://api.deepgram.com/v1/listen?interim_results=true&punctuate=true&vad_events=true',
        ['token', key]
      );
      socketRef.current = socket;

      socket.onopen = () => {
        console.log('[Voice Mode] ✓ WebSocket connected, listening for speech...');

        // 4. Create MediaRecorder
        const mediaRecorder = new MediaRecorder(stream, {
          mimeType: 'audio/webm',
        });
        mediaRecorderRef.current = mediaRecorder;

        // 5. Send audio chunks on data available
        mediaRecorder.ondataavailable = (event) => {
          if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
            socket.send(event.data);
          }
        };

        // Start recording and chunking audio every 250ms
        mediaRecorder.start(250);
      };

      // 6. Receive transcripts and handle silence detection
      socket.onmessage = (event) => {
        const data = JSON.parse(event.data) as DeepgramTranscript;

        // Check if this message has alternatives (some Deepgram messages don't)
        if (!data.channel?.alternatives) {
          return; // Skip non-transcript messages (metadata, VAD events, etc.)
        }

        const transcript = data.channel.alternatives[0]?.transcript || '';

        if (transcript) {
          // User has started speaking
          if (!hasStartedSpeakingRef.current) {
            console.log('[Voice Mode] User started speaking');
            hasStartedSpeakingRef.current = true;
            setVoiceState('user-speaking');
          }

          // Clear any existing silence timeout and countdown
          if (silenceTimeoutRef.current) {
            clearTimeout(silenceTimeoutRef.current);
            silenceTimeoutRef.current = null;
          }
          if (countdownIntervalRef.current) {
            clearInterval(countdownIntervalRef.current);
            countdownIntervalRef.current = null;
          }
          silenceStartTimeRef.current = null;
          setCountdown(3);

          // Handle transcript updates
          if (data.is_final) {
            // This is a finalized phrase - append it to our transcript
            transcriptRef.current = transcriptRef.current
              ? transcriptRef.current + ' ' + transcript
              : transcript;
            setInput(transcriptRef.current);
            console.log('[Voice Mode] Finalized phrase:', transcript);

            // Start a generous 3-second silence timer after each finalized phrase
            silenceStartTimeRef.current = Date.now();

            // Update countdown every 100ms
            countdownIntervalRef.current = setInterval(() => {
              if (silenceStartTimeRef.current) {
                const elapsed = Date.now() - silenceStartTimeRef.current;
                const remaining = Math.max(0, 3 - elapsed / 1000);
                setCountdown(remaining);
              }
            }, 100);

            silenceTimeoutRef.current = setTimeout(() => {
              console.log('[Voice Mode] 3 seconds of silence detected, submitting...');
              submitUserInput();
            }, 3000);
          } else {
            // This is an interim result - show it temporarily
            const displayText = transcriptRef.current
              ? transcriptRef.current + ' ' + transcript
              : transcript;
            setInput(displayText);
          }
        }
      };

      socket.onclose = () => {
|
||||
// Clean up stream
|
||||
stream.getTracks().forEach((track) => track.stop());
|
||||
console.log('[Voice Mode] WebSocket closed');
|
||||
};
|
||||
|
||||
socket.onerror = (err) => {
|
||||
console.error('[Voice Mode] WebSocket error:', err);
|
||||
setVoiceState('idle');
|
||||
};
|
||||
} catch (error) {
|
||||
console.error('[Voice Mode] Error starting listening:', error);
|
||||
setVoiceState('idle');
|
||||
}
|
||||
}, [submitUserInput]);
|
||||
|
||||
const skipAudioAndListen = useCallback(() => {
|
||||
console.log('[Voice Mode] Skipping audio playback');
|
||||
|
||||
// Stop current audio
|
||||
if (audioRef.current) {
|
||||
audioRef.current.pause();
|
||||
audioRef.current.currentTime = 0;
|
||||
}
|
||||
|
||||
setIsGeneratingSpeech(false);
|
||||
|
||||
// Go straight to listening
|
||||
startListening();
|
||||
}, [startListening]);
|
||||
|
||||
const exitVoiceMode = useCallback(() => {
|
||||
// Clear any timeouts and intervals
|
||||
if (silenceTimeoutRef.current) {
|
||||
clearTimeout(silenceTimeoutRef.current);
|
||||
silenceTimeoutRef.current = null;
|
||||
}
|
||||
if (countdownIntervalRef.current) {
|
||||
clearInterval(countdownIntervalRef.current);
|
||||
countdownIntervalRef.current = null;
|
||||
}
|
||||
silenceStartTimeRef.current = null;
|
||||
|
||||
// Stop recording
|
||||
if (mediaRecorderRef.current) {
|
||||
mediaRecorderRef.current.stop();
|
||||
mediaRecorderRef.current = null;
|
||||
}
|
||||
if (socketRef.current) {
|
||||
socketRef.current.close();
|
||||
socketRef.current = null;
|
||||
}
|
||||
|
||||
// Stop audio playback
|
||||
if (audioRef.current) {
|
||||
audioRef.current.pause();
|
||||
audioRef.current = null;
|
||||
}
|
||||
|
||||
hasStartedSpeakingRef.current = false;
|
||||
lastSpokenMessageId.current = null;
|
||||
transcriptRef.current = '';
|
||||
setInput('');
|
||||
setCountdown(3);
|
||||
setIsGeneratingSpeech(false);
|
||||
setVoiceState('idle');
|
||||
console.log('[Voice Mode] Exited voice conversation mode');
|
||||
}, []);
|
||||
|
||||
const handleToggleVoiceMode = useCallback(() => {
|
||||
if (voiceState === 'idle') {
|
||||
// Start voice conversation mode
|
||||
// First, check if there's a recent AI message to read out
|
||||
const lastAssistantMessage = [...messages]
|
||||
.reverse()
|
||||
.find((m) => m.role === 'assistant');
|
||||
|
||||
if (lastAssistantMessage) {
|
||||
// Extract text from the message
|
||||
const textPart = lastAssistantMessage.parts.find((p) => p.type === 'text');
|
||||
|
||||
if (textPart && textPart.text) {
|
||||
// Play the most recent AI message first, then start listening
|
||||
console.log('[Voice Mode] Starting voice mode, reading most recent AI message first');
|
||||
setVoiceState('ai-speaking');
|
||||
playAudio(textPart.text, lastAssistantMessage.id);
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
// No AI message to read, just start listening
|
||||
startListening();
|
||||
} else {
|
||||
// Exit voice conversation mode
|
||||
exitVoiceMode();
|
||||
}
|
||||
}, [voiceState, startListening, exitVoiceMode, messages]);
|
||||
|
||||
// Add initial greeting message on first load
|
||||
useEffect(() => {
|
||||
if (messages.length === 0) {
|
||||
setMessages([
|
||||
{
|
||||
id: 'initial-greeting',
|
||||
role: 'assistant',
|
||||
parts: [
|
||||
{
|
||||
type: 'text',
|
||||
text: 'Welcome to Ponderants! I\'m here to help you explore and structure your ideas through conversation.\n\nWhat would you like to talk about today? I can adapt my interview style to best suit your needs (Socratic questioning, collaborative brainstorming, or other approaches).\n\nJust start sharing your thoughts, and we\'ll discover meaningful insights together.',
|
||||
},
|
||||
],
|
||||
},
|
||||
]);
|
||||
}
|
||||
}, []);
|
||||
|
||||
// Auto-scroll to bottom
|
||||
useEffect(() => {
|
||||
viewport.current?.scrollTo({
|
||||
top: viewport.current.scrollHeight,
|
||||
behavior: 'smooth',
|
||||
});
|
||||
}, [messages]);
|
||||
|
||||
const handleSubmit = (e: React.FormEvent) => {
|
||||
e.preventDefault();
|
||||
if (!input.trim() || status === 'submitted' || status === 'streaming') return;
|
||||
|
||||
sendMessage({ text: input });
|
||||
setInput('');
|
||||
};
|
||||
|
||||
const handleNewConversation = () => {
|
||||
// Clear all messages and reset to initial greeting
|
||||
setMessages([
|
||||
{
|
||||
id: 'initial-greeting',
|
||||
role: 'assistant',
|
||||
parts: [
|
||||
{
|
||||
type: 'text',
|
||||
text: 'Welcome to Ponderants! I\'m here to help you explore and structure your ideas through conversation.\n\nWhat would you like to talk about today? I can adapt my interview style to best suit your needs (Socratic questioning, collaborative brainstorming, or other approaches).\n\nJust start sharing your thoughts, and we\'ll discover meaningful insights together.',
|
||||
},
|
||||
],
|
||||
},
|
||||
]);
|
||||
};
|
||||
|
||||
return (
|
||||
<Container size="md" h="100vh" style={{ display: 'flex', flexDirection: 'column' }}>
|
||||
<Group justify="space-between" py="md">
|
||||
<Title order={2}>
|
||||
Ponderants Interview
|
||||
</Title>
|
||||
<Group gap="md">
|
||||
<Tooltip label="Start a new conversation">
|
||||
<Button
|
||||
variant="subtle"
|
||||
onClick={handleNewConversation}
|
||||
disabled={status === 'submitted' || status === 'streaming'}
|
||||
>
|
||||
New Conversation
|
||||
</Button>
|
||||
</Tooltip>
|
||||
<UserMenu />
|
||||
</Group>
|
||||
</Group>
|
||||
|
||||
<ScrollArea
|
||||
h="100%"
|
||||
style={{ flex: 1 }}
|
||||
viewportRef={viewport}
|
||||
>
|
||||
<Stack gap="md" pb="xl">
|
||||
{messages.map((m) => (
|
||||
<Paper
|
||||
key={m.id}
|
||||
withBorder
|
||||
shadow="md"
|
||||
p="sm"
|
||||
radius="lg"
|
||||
style={{
|
||||
alignSelf: m.role === 'user' ? 'flex-end' : 'flex-start',
|
||||
backgroundColor:
|
||||
m.role === 'user' ? '#343a40' : '#212529',
|
||||
}}
|
||||
w="80%"
|
||||
>
|
||||
<Text fw={700} size="sm">{m.role === 'user' ? 'You' : 'AI'}</Text>
|
||||
{m.parts.map((part, i) => {
|
||||
if (part.type === 'text') {
|
||||
return (
|
||||
<Text key={i} style={{ whiteSpace: 'pre-wrap' }}>
|
||||
{part.text}
|
||||
</Text>
|
||||
);
|
||||
}
|
||||
|
||||
// Handle tool calls (e.g., suggest_node)
|
||||
if (part.type === 'tool-call') {
|
||||
return (
|
||||
<Paper key={i} withBorder p="xs" mt="xs" bg="dark.6">
|
||||
<Text size="xs" c="dimmed" mb="xs">
|
||||
💡 Node Suggestion
|
||||
</Text>
|
||||
<Text fw={600}>{part.args.title}</Text>
|
||||
<Text size="sm" mt="xs">
|
||||
{part.args.content}
|
||||
</Text>
|
||||
{part.args.tags && part.args.tags.length > 0 && (
|
||||
<Group gap="xs" mt="xs">
|
||||
{part.args.tags.map((tag: string, tagIdx: number) => (
|
||||
<Text key={tagIdx} size="xs" c="blue.4">
|
||||
#{tag}
|
||||
</Text>
|
||||
))}
|
||||
</Group>
|
||||
)}
|
||||
</Paper>
|
||||
);
|
||||
}
|
||||
|
||||
return null;
|
||||
})}
|
||||
</Paper>
|
||||
))}
|
||||
|
||||
{/* Typing indicator while AI is generating a response */}
|
||||
{(status === 'submitted' || status === 'streaming') && (
|
||||
<Paper
|
||||
withBorder
|
||||
shadow="md"
|
||||
p="sm"
|
||||
radius="lg"
|
||||
style={{
|
||||
alignSelf: 'flex-start',
|
||||
backgroundColor: '#212529',
|
||||
}}
|
||||
w="80%"
|
||||
>
|
||||
<Text fw={700} size="sm">AI</Text>
|
||||
<Group gap="xs" mt="xs">
|
||||
<Loader size="xs" />
|
||||
<Text size="sm" c="dimmed">Thinking...</Text>
|
||||
</Group>
|
||||
</Paper>
|
||||
)}
|
||||
|
||||
</Stack>
|
||||
</ScrollArea>
|
||||
|
||||
{/* Big Voice Mode Button - shown above text input */}
|
||||
<Paper withBorder p="md" radius="xl" my="md">
|
||||
<Stack gap="sm">
|
||||
<Group gap="sm">
|
||||
<Button
|
||||
onClick={handleToggleVoiceMode}
|
||||
size="xl"
|
||||
radius="xl"
|
||||
h={80}
|
||||
style={{ flex: 1 }}
|
||||
color={
|
||||
voiceState === 'ai-speaking'
|
||||
? 'blue'
|
||||
: voiceState === 'user-speaking'
|
||||
? 'green'
|
||||
: voiceState === 'listening'
|
||||
? 'yellow'
|
||||
: voiceState === 'processing'
|
||||
? 'blue'
|
||||
: 'gray'
|
||||
}
|
||||
variant={voiceState !== 'idle' ? 'filled' : 'light'}
|
||||
leftSection={
|
||||
voiceState === 'ai-speaking' ? (
|
||||
<IconVolume size={32} />
|
||||
) : voiceState === 'user-speaking' || voiceState === 'listening' ? (
|
||||
<IconMicrophone size={32} />
|
||||
) : (
|
||||
<IconMicrophone size={32} />
|
||||
)
|
||||
}
|
||||
disabled={status === 'submitted' || status === 'streaming'}
|
||||
>
|
||||
{voiceState === 'idle'
|
||||
? 'Start Voice Conversation'
|
||||
: voiceState === 'listening'
|
||||
? 'Listening... Start speaking'
|
||||
: voiceState === 'user-speaking'
|
||||
? silenceStartTimeRef.current
|
||||
? `Speaking... (auto-submits in ${countdown.toFixed(1)}s)`
|
||||
: 'Speaking... (will auto-submit after 3s silence)'
|
||||
: voiceState === 'processing'
|
||||
? 'Processing...'
|
||||
: isGeneratingSpeech
|
||||
? 'Generating speech...'
|
||||
: 'AI is speaking... Please wait'}
|
||||
</Button>
|
||||
|
||||
{/* Skip button - only shown when AI is speaking */}
|
||||
{voiceState === 'ai-speaking' && (
|
||||
<Button
|
||||
onClick={skipAudioAndListen}
|
||||
size="xl"
|
||||
radius="xl"
|
||||
h={80}
|
||||
color="gray"
|
||||
variant="outline"
|
||||
>
|
||||
Skip
|
||||
</Button>
|
||||
)}
|
||||
</Group>
|
||||
|
||||
{/* Text Input - always available */}
|
||||
<form onSubmit={handleSubmit}>
|
||||
<Group>
|
||||
<TextInput
|
||||
value={input}
|
||||
onChange={(e) => setInput(e.currentTarget.value)}
|
||||
placeholder="Or type your thoughts here..."
|
||||
style={{ flex: 1 }}
|
||||
variant="filled"
|
||||
disabled={voiceState !== 'idle'}
|
||||
/>
|
||||
<Button
|
||||
type="submit"
|
||||
radius="xl"
|
||||
loading={status === 'submitted' || status === 'streaming'}
|
||||
disabled={!input.trim() || voiceState !== 'idle'}
|
||||
>
|
||||
Send
|
||||
</Button>
|
||||
</Group>
|
||||
</form>
|
||||
</Stack>
|
||||
</Paper>
|
||||
</Container>
|
||||
);
|
||||
}
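
Note: the `/api/tts` route this voice loop calls is not part of this diff. For orientation, a minimal sketch of a compatible handler — the provider, model, and env var name below are assumptions; only the `{ text }`-in, audio-bytes-out contract comes from the client code above:

```ts
// Hypothetical sketch of app/api/tts/route.ts — not the repo's actual route.
import { NextRequest, NextResponse } from 'next/server';

export async function POST(req: NextRequest) {
  const { text } = await req.json();
  if (!text) {
    return NextResponse.json({ error: 'Missing text' }, { status: 400 });
  }

  // Placeholder TTS call — swap in whatever provider the app actually uses.
  const upstream = await fetch('https://api.deepgram.com/v1/speak?model=aura-asteria-en', {
    method: 'POST',
    headers: {
      Authorization: `Token ${process.env.DEEPGRAM_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ text }),
  });

  if (!upstream.ok) {
    return NextResponse.json({ error: 'TTS generation failed' }, { status: 502 });
  }

  // Stream the audio straight back; the client turns it into a blob URL.
  return new NextResponse(upstream.body, {
    headers: { 'Content-Type': 'audio/mpeg' },
  });
}
```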
814
app/chat/page.tsx.old
Normal file
@@ -0,0 +1,814 @@
'use client';

import { useChat } from '@ai-sdk/react';
import {
  Stack,
  TextInput,
  Button,
  Paper,
  ScrollArea,
  Title,
  Container,
  Group,
  Text,
  Loader,
  ActionIcon,
  Tooltip,
} from '@mantine/core';
import { useRef, useState, useEffect, useCallback } from 'react';
import { IconVolume, IconMicrophone, IconMicrophoneOff } from '@tabler/icons-react';
import { createActor, type SnapshotFrom } from 'xstate';
import { useSelector } from '@xstate/react';
import { appMachine } from '@/lib/app-machine';
import { UserMenu } from '@/components/UserMenu';

// Define the shape of the Deepgram transcript
interface DeepgramTranscript {
  channel: {
    alternatives: Array<{
      transcript: string;
    }>;
  };
  is_final: boolean;
  speech_final: boolean;
}

/**
 * Get the voice button text based on the current state tags.
 * This replaces complex nested ternaries with a clean, readable function.
 */
function getVoiceButtonText(
  state: SnapshotFrom<typeof appMachine>,
  silenceStartTime: number | null
): string {
  // Check tags in priority order and return the appropriate text
  let buttonText: string;

  if (state.hasTag('textMode') || state.hasTag('voiceIdle')) {
    buttonText = 'Start Voice Conversation';
  } else if (state.hasTag('listening')) {
    buttonText = 'Listening... Start speaking';
  } else if (state.hasTag('userSpeaking')) {
    buttonText = 'Speaking... (will auto-submit after 3s silence)';
  } else if (state.hasTag('timingOut')) {
    if (silenceStartTime) {
      const elapsed = Date.now() - silenceStartTime;
      const remaining = Math.max(0, 3 - elapsed / 1000);
      buttonText = `Speaking... (auto-submits in ${remaining.toFixed(1)}s)`;
    } else {
      buttonText = 'Speaking... (timing out...)';
    }
  } else if (state.hasTag('processing')) {
    buttonText = 'Processing...';
  } else if (state.hasTag('aiGenerating')) {
    buttonText = 'Generating speech...';
  } else if (state.hasTag('aiSpeaking')) {
    buttonText = 'AI is speaking... Please wait';
  } else {
    // Fallback (should never be reached if tags are properly defined)
    buttonText = 'Start Voice Conversation';
    console.warn('[Voice Mode] No matching tag found, using fallback text. Active tags:', state.tags);
  }

  console.log('[Voice Mode] Button text determined:', buttonText, 'Active tags:', Array.from(state.tags));
  return buttonText;
}
export default function ChatPage() {
  const viewport = useRef<HTMLDivElement>(null);

  // XState machine for voice mode state management
  const [actorRef] = useState(() => createActor(appMachine).start());
  const state = useSelector(actorRef, (snapshot) => snapshot);
  const send = actorRef.send.bind(actorRef);

  // Imperative refs for managing side effects
  const audioRef = useRef<HTMLAudioElement | null>(null);
  const mediaRecorderRef = useRef<MediaRecorder | null>(null);
  const socketRef = useRef<WebSocket | null>(null);
  const silenceTimeoutRef = useRef<NodeJS.Timeout | null>(null);
  const silenceStartTimeRef = useRef<number | null>(null);
  const countdownIntervalRef = useRef<NodeJS.Timeout | null>(null);
  const shouldCancelAudioRef = useRef<boolean>(false); // Flag to cancel pending audio operations

  const { messages, sendMessage, setMessages, status } = useChat();

  // Extract text from message (handles v5 parts structure)
  const getMessageText = (msg: any): string => {
    if ('parts' in msg && Array.isArray(msg.parts)) {
      const textPart = msg.parts.find((p: any) => p.type === 'text');
      return textPart?.text || '';
    }
    return msg.content || '';
  };

  // Handle AI response in voice conversation mode - SIMPLE VERSION
  useEffect(() => {
    if (!state.hasTag('processing')) return;
    if (status !== 'ready') {
      console.log('[Voice Mode] Waiting, status:', status);
      return;
    }

    const transcript = state.context.transcript?.trim();
    if (!transcript) return;

    console.log('[Voice Mode] === PROCESSING ===');
    console.log('[Voice Mode] Transcript:', transcript);
    console.log('[Voice Mode] Messages:', messages.length);

    // Get last 2 messages
    const lastMsg = messages[messages.length - 1];
    const secondLastMsg = messages[messages.length - 2];

    console.log('[Voice Mode] Last msg:', lastMsg?.role, getMessageText(lastMsg || {}).substring(0, 30));
    console.log('[Voice Mode] 2nd last msg:', secondLastMsg?.role, getMessageText(secondLastMsg || {}).substring(0, 30));

    // Case 1: User message not submitted yet.
    // Check if the last message is the user's transcript.
    const userMessageExists = messages.some(m =>
      m.role === 'user' && getMessageText(m) === transcript
    );

    if (!userMessageExists) {
      console.log('[Voice Mode] → Submitting user message');
      submitUserInput();
      return;
    }

    // Case 2: User message submitted, check if AI has responded.
    // After user submits, if AI responds, the new AI message is LAST.
    if (lastMsg && lastMsg.role === 'assistant' &&
        secondLastMsg && secondLastMsg.role === 'user' &&
        getMessageText(secondLastMsg) === transcript) {

      const aiMsg = lastMsg;
      console.log('[Voice Mode] → AI response found:', aiMsg.id);
      console.log('[Voice Mode] → Last spoken:', state.context.lastSpokenMessageId);

      // Only play if we haven't played this message yet
      if (state.context.lastSpokenMessageId !== aiMsg.id) {
        const text = getMessageText(aiMsg);
        console.log('[Voice Mode] → Playing:', text.substring(0, 50) + '...');
        send({ type: 'AI_RESPONSE_READY', messageId: aiMsg.id, text });
        playAudio(text, aiMsg.id);
      } else {
        console.log('[Voice Mode] → Already played, skipping');
      }
      return;
    }

    // Case 3: Waiting for AI response
    console.log('[Voice Mode] → Waiting for AI response...');
  }, [messages, state, status, send]);

  // Stop all audio playback and cancel pending operations
  const stopAllAudio = useCallback(() => {
    console.log('[Voice Mode] Stopping all audio operations');

    // Set cancel flag to prevent any pending audio from playing
    shouldCancelAudioRef.current = true;

    // Stop and clean up audio element
    if (audioRef.current) {
      audioRef.current.pause();
      audioRef.current.currentTime = 0;
      audioRef.current.src = '';
    }
  }, []);

  const playAudio = async (text: string, messageId: string) => {
    try {
      // Reset cancel flag at the start of a new audio operation
      shouldCancelAudioRef.current = false;

      console.log('[Voice Mode] Generating speech for message:', messageId);
      console.log('[Voice Mode] State transition:', state.value);

      const response = await fetch('/api/tts', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ text }),
      });

      // Check if we should cancel before continuing
      if (shouldCancelAudioRef.current) {
        console.log('[Voice Mode] Audio generation canceled before blob creation');
        return;
      }

      if (!response.ok) {
        throw new Error('Failed to generate speech');
      }

      const audioBlob = await response.blob();

      // Check again after the async operation
      if (shouldCancelAudioRef.current) {
        console.log('[Voice Mode] Audio generation canceled after blob creation');
        return;
      }

      const audioUrl = URL.createObjectURL(audioBlob);

      // Create or reuse audio element
      if (!audioRef.current) {
        audioRef.current = new Audio();
      }

      audioRef.current.src = audioUrl;
      audioRef.current.onended = () => {
        URL.revokeObjectURL(audioUrl);
        console.log('[Voice Mode] ✓ Finished playing audio, sending TTS_FINISHED event');
        console.log('[Voice Mode] State transition:', state.value);
        send({ type: 'TTS_FINISHED', messageId });

        // After AI finishes speaking, go back to listening for user
        startListening();
      };

      audioRef.current.onerror = () => {
        URL.revokeObjectURL(audioUrl);
        console.error('[Voice Mode] Error playing audio');
        // On error, also go back to listening
        startListening();
      };

      // Final check before playing
      if (shouldCancelAudioRef.current) {
        console.log('[Voice Mode] Audio playback canceled before play()');
        URL.revokeObjectURL(audioUrl);
        return;
      }

      await audioRef.current.play();

      // Only send TTS_PLAYING if we haven't been canceled
      if (!shouldCancelAudioRef.current) {
        console.log('[Voice Mode] ✓ Playing audio, sending TTS_PLAYING event');
        console.log('[Voice Mode] State transition:', state.value);
        send({ type: 'TTS_PLAYING' });
      } else {
        console.log('[Voice Mode] Audio playback canceled after play()');
        URL.revokeObjectURL(audioUrl);
      }
    } catch (error) {
      console.error('[Voice Mode] Error:', error);
      // On error, go back to listening
      startListening();
    }
  };

  const submitUserInput = useCallback(() => {
    // Clear any pending silence timeout and countdown
    if (silenceTimeoutRef.current) {
      clearTimeout(silenceTimeoutRef.current);
      silenceTimeoutRef.current = null;
    }
    if (countdownIntervalRef.current) {
      clearInterval(countdownIntervalRef.current);
      countdownIntervalRef.current = null;
    }
    silenceStartTimeRef.current = null;

    // Stop recording
    if (mediaRecorderRef.current) {
      mediaRecorderRef.current.stop();
      mediaRecorderRef.current = null;
    }
    if (socketRef.current) {
      socketRef.current.close();
      socketRef.current = null;
    }

    // Send the transcript as a message if we have one
    const transcript = state.context.transcript;
    if (transcript.trim()) {
      console.log('[Voice Mode] Submitting transcript:', transcript);
      console.log('[Voice Mode] State transition:', state.value);

      setTimeout(() => {
        const form = document.querySelector('form');
        if (form) {
          console.log('[Voice Mode] Form found, submitting...');
          form.requestSubmit();
        } else {
          console.error('[Voice Mode] Form not found!');
        }
      }, 100);
    } else {
      // If no transcript, go back to listening
      console.log('[Voice Mode] No transcript to submit, going back to listening');
      startListening();
    }
  }, [state, send]);

  const startListening = useCallback(async () => {
    silenceStartTimeRef.current = null;

    // Send event to enter listening state (which clears transcript/input/countdown)
    console.log('[Voice Mode] Sending START_LISTENING event (implicitly via state transition)');
    console.log('[Voice Mode] State transition:', state.value);

    try {
      // 1. Get the Deepgram API key
      const response = await fetch('/api/voice-token', { method: 'POST' });
      const data = await response.json();

      if (data.error) {
        throw new Error(data.error);
      }

      const { key } = data;

      // 2. Access the microphone
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

      // 3. Open direct WebSocket to Deepgram with voice activity detection
      const socket = new WebSocket(
        'wss://api.deepgram.com/v1/listen?interim_results=true&punctuate=true&vad_events=true',
        ['token', key]
      );
      socketRef.current = socket;

      socket.onopen = () => {
        console.log('[Voice Mode] ✓ WebSocket connected, listening for speech...');
        console.log('[Voice Mode] State transition:', state.value);

        // 4. Create MediaRecorder
        const mediaRecorder = new MediaRecorder(stream, {
          mimeType: 'audio/webm',
        });
        mediaRecorderRef.current = mediaRecorder;

        // 5. Send audio chunks on data available
        mediaRecorder.ondataavailable = (event) => {
          if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
            socket.send(event.data);
          }
        };

        // Start recording and chunking audio every 250ms
        mediaRecorder.start(250);
      };

      // 6. Receive transcripts and handle silence detection
      socket.onmessage = (event) => {
        const data = JSON.parse(event.data) as DeepgramTranscript;

        // Check if this message has alternatives (some Deepgram messages don't)
        if (!data.channel?.alternatives) {
          return; // Skip non-transcript messages (metadata, VAD events, etc.)
        }

        const transcript = data.channel.alternatives[0]?.transcript || '';

        if (transcript) {
          // User has started speaking
          if (!state.context.hasStartedSpeaking) {
            console.log('[Voice Mode] User started speaking, sending USER_STARTED_SPEAKING event');
            console.log('[Voice Mode] State transition:', state.value);
            send({ type: 'USER_STARTED_SPEAKING' });
          }

          // Clear any existing silence timeout and countdown
          if (silenceTimeoutRef.current) {
            clearTimeout(silenceTimeoutRef.current);
            silenceTimeoutRef.current = null;
          }
          if (countdownIntervalRef.current) {
            clearInterval(countdownIntervalRef.current);
            countdownIntervalRef.current = null;
          }
          silenceStartTimeRef.current = null;

          // Handle transcript updates
          if (data.is_final) {
            // This is a finalized phrase - send it to the machine
            console.log('[Voice Mode] === FINALIZED PHRASE ===');
            console.log('[Voice Mode] Transcript:', transcript);
            console.log('[Voice Mode] state.value BEFORE:', JSON.stringify(state.value));
            console.log('[Voice Mode] tags BEFORE:', Array.from(state.tags));
            console.log('[Voice Mode] context BEFORE:', JSON.stringify(state.context));
            console.log('[Voice Mode] Sending FINALIZED_PHRASE event');
            send({ type: 'FINALIZED_PHRASE', phrase: transcript });

            // Start a generous 3-second silence timer after each finalized phrase
            silenceStartTimeRef.current = Date.now();

            // Update countdown every 100ms
            countdownIntervalRef.current = setInterval(() => {
              if (silenceStartTimeRef.current) {
                const elapsed = Date.now() - silenceStartTimeRef.current;
                const remaining = Math.max(0, 3 - elapsed / 1000);
                // Note: countdown is now managed in machine context, but we need
                // to update it frequently for UI display. This is acceptable as
                // a UI-only side effect.
              }
            }, 100);

            silenceTimeoutRef.current = setTimeout(() => {
              console.log('[Voice Mode] 3 seconds of silence detected, sending SILENCE_TIMEOUT event');
              console.log('[Voice Mode] State transition:', state.value);
              send({ type: 'SILENCE_TIMEOUT' });
              // Note: submitUserInput will be called by the processing state effect
            }, 3000);
          } else {
            // This is an interim result - update display (send TRANSCRIPT_UPDATE)
            const currentTranscript = state.context.transcript;
            const displayText = currentTranscript
              ? currentTranscript + ' ' + transcript
              : transcript;
            send({ type: 'TRANSCRIPT_UPDATE', transcript: displayText });
          }
        }
      };

      socket.onclose = () => {
        // Clean up stream
        stream.getTracks().forEach((track) => track.stop());
        console.log('[Voice Mode] WebSocket closed');
        console.log('[Voice Mode] State transition:', state.value);
      };

      socket.onerror = (err) => {
        console.error('[Voice Mode] WebSocket error:', err);
        console.log('[Voice Mode] State transition:', state.value);
        // On error, toggle back to text mode if we're in voice mode
        if (!state.hasTag('textMode')) {
          send({ type: 'TOGGLE_VOICE_MODE' });
        }
      };
    } catch (error) {
      console.error('[Voice Mode] Error starting listening:', error);
      console.log('[Voice Mode] State transition:', state.value);
      // On error, toggle back to text mode if we're in voice mode
      if (!state.hasTag('textMode')) {
        send({ type: 'TOGGLE_VOICE_MODE' });
      }
    }
  }, [submitUserInput, state, send]);

  const skipAudioAndListen = useCallback(() => {
    console.log('[Voice Mode] === SKIP BUTTON CLICKED ===');
    console.log('[Voice Mode] Current state.value:', JSON.stringify(state.value));
    console.log('[Voice Mode] Current tags:', Array.from(state.tags));

    // Stop ALL audio operations
    stopAllAudio();

    // Send skip event
    send({ type: 'SKIP_AUDIO' });

    // Go straight to listening
    startListening();
  }, [startListening, state, send, stopAllAudio]);

  const handleToggleVoiceMode = useCallback(() => {
    console.log('[Voice Mode] Voice button pressed, sending TOGGLE_VOICE_MODE event');
    console.log('[Voice Mode] Current state:', state.value);
    send({ type: 'TOGGLE_VOICE_MODE' });
  }, [state, send]);

  // Handle entering voice.idle state (after TOGGLE_VOICE_MODE from text mode)
  useEffect(() => {
    if (!state.hasTag('voiceIdle')) return;

    console.log('[Voice Mode] Entered voice.idle, checking for AI message to read');

    // Get ALL assistant messages in order
    const assistantMessages = messages.filter((m) => m.role === 'assistant');
    console.log('[Voice Mode] (idle) Found', assistantMessages.length, 'assistant messages');

    if (assistantMessages.length === 0) {
      console.log('[Voice Mode] (idle) No assistant messages, starting listening');
      send({ type: 'START_LISTENING' });
      startListening();
      return;
    }

    // Get the LAST (most recent) assistant message
    const latestAssistantMessage = assistantMessages[assistantMessages.length - 1];
    console.log('[Voice Mode] (idle) Latest message ID:', latestAssistantMessage.id);
    console.log('[Voice Mode] (idle) Last spoken message ID:', state.context.lastSpokenMessageId);

    // Skip if we've already spoken this message
    if (state.context.lastSpokenMessageId === latestAssistantMessage.id) {
      console.log('[Voice Mode] (idle) Already spoke latest message, starting listening');
      send({ type: 'START_LISTENING' });
      startListening();
      return;
    }

    // Extract text from the message
    let text = '';
    if ('parts' in latestAssistantMessage && Array.isArray((latestAssistantMessage as any).parts)) {
      const textPart = (latestAssistantMessage as any).parts.find((p: any) => p.type === 'text');
      text = textPart?.text || '';
    }

    if (text) {
      // Play the most recent AI message first, then start listening
      console.log('[Voice Mode] (idle) Reading latest AI message:', text.substring(0, 50) + '...');
      send({ type: 'AI_RESPONSE_READY', messageId: latestAssistantMessage.id, text });
      playAudio(text, latestAssistantMessage.id);
      return;
    }

    // No text found, just start listening
    console.log('[Voice Mode] (idle) No text in latest message, starting listening');
    send({ type: 'START_LISTENING' });
    startListening();
  }, [state, messages, send]);

  // Stop audio when leaving audio-related states
  useEffect(() => {
    const isInAudioState = state.hasTag('canSkipAudio');

    if (!isInAudioState) {
      // We're not in an audio state, make sure everything is stopped
      stopAllAudio();
    }
  }, [state, stopAllAudio]);

  // Log state transitions for debugging
  useEffect(() => {
    console.log('[Voice Mode] === STATE TRANSITION ===');
    console.log('[Voice Mode] state.value:', JSON.stringify(state.value));
    console.log('[Voice Mode] Active tags:', Array.from(state.tags));
    console.log('[Voice Mode] Context:', JSON.stringify(state.context));
  }, [state.value]);

  // Add initial greeting message on first load
  useEffect(() => {
    if (messages.length === 0) {
      setMessages([
        {
          id: 'initial-greeting',
          role: 'assistant',
          parts: [
            {
              type: 'text',
              text: 'Welcome to Ponderants! I\'m here to help you explore and structure your ideas through conversation.\n\nWhat would you like to talk about today? I can adapt my interview style to best suit your needs (Socratic questioning, collaborative brainstorming, or other approaches).\n\nJust start sharing your thoughts, and we\'ll discover meaningful insights together.',
            },
          ],
        } as any,
      ]);
    }
  }, []);

  // Auto-scroll to bottom
  useEffect(() => {
    viewport.current?.scrollTo({
      top: viewport.current.scrollHeight,
      behavior: 'smooth',
    });
  }, [messages]);

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    const inputText = state.context.input;
    if (!inputText.trim() || status === 'submitted' || status === 'streaming') return;

    console.log('[Voice Mode] Submitting message:', inputText);
    console.log('[Voice Mode] State transition:', state.value);

    sendMessage({ text: inputText });
    // Clear input via machine context (will be cleared on next state transition)
  };

  const handleNewConversation = () => {
    // Clear all messages and reset to initial greeting
    setMessages([
      {
        id: 'initial-greeting',
        role: 'assistant',
        parts: [
          {
            type: 'text',
            text: 'Welcome to Ponderants! I\'m here to help you explore and structure your ideas through conversation.\n\nWhat would you like to talk about today? I can adapt my interview style to best suit your needs (Socratic questioning, collaborative brainstorming, or other approaches).\n\nJust start sharing your thoughts, and we\'ll discover meaningful insights together.',
          },
        ],
      } as any,
    ]);
  };

  return (
    <Container size="md" h="100vh" style={{ display: 'flex', flexDirection: 'column' }}>
      <Group justify="space-between" py="md">
        <Title order={2}>
          Ponderants Interview
        </Title>
        <Group gap="md">
          <Tooltip label="Start a new conversation">
            <Button
              variant="subtle"
              onClick={handleNewConversation}
              disabled={status === 'submitted' || status === 'streaming'}
            >
              New Conversation
            </Button>
          </Tooltip>
          <UserMenu />
        </Group>
      </Group>

      <ScrollArea
        h="100%"
        style={{ flex: 1 }}
        viewportRef={viewport}
      >
        <Stack gap="md" pb="xl">
          {messages.map((m) => (
            <Paper
              key={m.id}
              withBorder
              shadow="md"
              p="sm"
              radius="lg"
              style={{
                alignSelf: m.role === 'user' ? 'flex-end' : 'flex-start',
                backgroundColor:
                  m.role === 'user' ? '#343a40' : '#212529',
              }}
              w="80%"
            >
              <Text fw={700} size="sm">{m.role === 'user' ? 'You' : 'AI'}</Text>
              {/* Extract text from message parts */}
              {(() => {
                if ('parts' in m && Array.isArray((m as any).parts)) {
                  return (m as any).parts.map((part: any, i: number) => {
                    if (part.type === 'text') {
                      return (
                        <Text key={i} style={{ whiteSpace: 'pre-wrap' }}>
                          {part.text}
                        </Text>
                      );
                    }
                    return null;
                  });
                }
                return <Text>Message content unavailable</Text>;
              })()}
            </Paper>
          ))}

          {/* Typing indicator while AI is generating a response */}
          {(status === 'submitted' || status === 'streaming') && (
            <Paper
              withBorder
              shadow="md"
              p="sm"
              radius="lg"
              style={{
                alignSelf: 'flex-start',
                backgroundColor: '#212529',
              }}
              w="80%"
            >
              <Text fw={700} size="sm">AI</Text>
              <Group gap="xs" mt="xs">
                <Loader size="xs" />
                <Text size="sm" c="dimmed">Thinking...</Text>
              </Group>
            </Paper>
          )}
        </Stack>
      </ScrollArea>

      {/* Big Voice Mode Button - shown above text input */}
      <Paper withBorder p="md" radius="xl" my="md">
        <Stack gap="sm">
          <Group gap="sm">
            <Button
              onClick={handleToggleVoiceMode}
              size="xl"
              radius="xl"
              h={80}
              style={{ flex: 1 }}
              color={
                state.hasTag('canSkipAudio')
                  ? 'blue'
                  : state.hasTag('userSpeaking') || state.hasTag('timingOut')
                    ? 'green'
                    : state.hasTag('listening')
                      ? 'yellow'
                      : state.hasTag('processing')
                        ? 'blue'
                        : 'gray'
              }
              variant={!state.hasTag('textMode') && !state.hasTag('voiceIdle') ? 'filled' : 'light'}
              leftSection={
                state.hasTag('canSkipAudio') ? (
                  <IconVolume size={32} />
                ) : (
                  <IconMicrophone size={32} />
                )
              }
              disabled={status === 'submitted' || status === 'streaming'}
            >
              {getVoiceButtonText(state, silenceStartTimeRef.current)}
            </Button>

            {/* Skip button - shown when audio can be skipped */}
            {state.hasTag('canSkipAudio') && (
              <Button
                onClick={skipAudioAndListen}
                size="xl"
                radius="xl"
                h={80}
                color="gray"
                variant="outline"
              >
                Skip
              </Button>
            )}
          </Group>

          {/* Test Controls - Development Only */}
          {process.env.NODE_ENV === 'development' && (
            <Paper withBorder p="sm" radius="md" style={{ backgroundColor: '#1a1b1e' }}>
              <Stack gap="xs">
                <Text size="xs" fw={700} c="dimmed">DEV: State Machine Testing</Text>
                <Text size="xs" c="dimmed">
                  State: {JSON.stringify(state.value)} | Tags: {Array.from(state.tags).join(', ')}
                </Text>
                <Group gap="xs">
                  <Button
                    size="xs"
                    onClick={() => send({ type: 'START_LISTENING' })}
                    disabled={state.hasTag('textMode')}
                  >
                    Start Listening
                  </Button>
                  <Button
                    size="xs"
                    onClick={() => send({ type: 'USER_STARTED_SPEAKING' })}
                    disabled={!state.hasTag('listening')}
                  >
                    Simulate Speech
                  </Button>
                  <Button
                    size="xs"
                    onClick={() => {
                      send({ type: 'FINALIZED_PHRASE', phrase: 'Test message' });
                    }}
                    disabled={!state.hasTag('userSpeaking') && !state.hasTag('listening')}
                  >
                    Add Phrase
                  </Button>
                  <Button
                    size="xs"
                    onClick={() => send({ type: 'SILENCE_TIMEOUT' })}
                    disabled={!state.hasTag('timingOut')}
                  >
                    Trigger Timeout
                  </Button>
                  <Button
                    size="xs"
                    onClick={() => {
                      const testMsg = messages.filter(m => m.role === 'assistant')[0];
                      if (testMsg) {
                        const text = (testMsg as any).parts?.[0]?.text || 'Test AI response';
                        send({ type: 'AI_RESPONSE_READY', messageId: testMsg.id, text });
                      }
                    }}
                    disabled={!state.hasTag('processing')}
                  >
                    Simulate AI Response
                  </Button>
                </Group>
              </Stack>
            </Paper>
          )}

          {/* Text Input - always available */}
          <form onSubmit={handleSubmit}>
            <Group>
              <TextInput
                value={state.context.input}
                onChange={(e) => send({ type: 'TRANSCRIPT_UPDATE', transcript: e.currentTarget.value })}
                placeholder="Or type your thoughts here..."
                style={{ flex: 1 }}
                variant="filled"
                disabled={!state.hasTag('textMode') && !state.hasTag('voiceIdle')}
              />
              <Button
                type="submit"
                radius="xl"
                loading={status === 'submitted' || status === 'streaming'}
                disabled={!state.context.input.trim() || (!state.hasTag('textMode') && !state.hasTag('voiceIdle'))}
              >
                Send
              </Button>
            </Group>
          </form>
        </Stack>
      </Paper>
    </Container>
  );
}
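
Note: everything in the old page above keys off tags on `appMachine` from `@/lib/app-machine`, which this diff does not include. As a rough sketch only, here is what a compatible machine could look like, inferred from the tags and events used above — the state names, context shape, and exact transitions are assumptions, not the repo's actual machine:

```ts
// Hypothetical reconstruction of lib/app-machine.ts (XState v5), for orientation only.
import { setup, assign } from 'xstate';

export const appMachine = setup({
  types: {
    context: {} as {
      transcript: string;
      input: string;
      hasStartedSpeaking: boolean;
      lastSpokenMessageId: string | null;
    },
    events: {} as
      | { type: 'TOGGLE_VOICE_MODE' }
      | { type: 'START_LISTENING' }
      | { type: 'USER_STARTED_SPEAKING' }
      | { type: 'FINALIZED_PHRASE'; phrase: string }
      | { type: 'TRANSCRIPT_UPDATE'; transcript: string }
      | { type: 'SILENCE_TIMEOUT' }
      | { type: 'AI_RESPONSE_READY'; messageId: string; text: string }
      | { type: 'TTS_PLAYING' }
      | { type: 'TTS_FINISHED'; messageId: string }
      | { type: 'SKIP_AUDIO' },
  },
}).createMachine({
  id: 'app',
  context: { transcript: '', input: '', hasStartedSpeaking: false, lastSpokenMessageId: null },
  initial: 'text',
  states: {
    text: {
      tags: ['textMode'],
      on: {
        TOGGLE_VOICE_MODE: { target: 'voice' },
        // In text mode the page routes typed input through TRANSCRIPT_UPDATE
        TRANSCRIPT_UPDATE: { actions: assign({ input: ({ event }) => event.transcript }) },
      },
    },
    voice: {
      initial: 'idle',
      on: { TOGGLE_VOICE_MODE: { target: 'text' } },
      states: {
        idle: {
          tags: ['voiceIdle'],
          on: {
            START_LISTENING: { target: 'listening' },
            AI_RESPONSE_READY: {
              target: 'aiGenerating',
              actions: assign({ lastSpokenMessageId: ({ event }) => event.messageId }),
            },
          },
        },
        listening: {
          tags: ['listening'],
          // Entering listening clears the previous turn, matching the page's comment
          entry: assign({ transcript: '', input: '', hasStartedSpeaking: false }),
          on: {
            USER_STARTED_SPEAKING: {
              target: 'userSpeaking',
              actions: assign({ hasStartedSpeaking: true }),
            },
          },
        },
        userSpeaking: {
          tags: ['userSpeaking'],
          on: {
            TRANSCRIPT_UPDATE: { actions: assign({ input: ({ event }) => event.transcript }) },
            FINALIZED_PHRASE: {
              target: 'timingOut',
              actions: assign({
                transcript: ({ context, event }) =>
                  context.transcript ? `${context.transcript} ${event.phrase}` : event.phrase,
              }),
            },
          },
        },
        timingOut: {
          tags: ['userSpeaking', 'timingOut'],
          on: {
            USER_STARTED_SPEAKING: { target: 'userSpeaking' },
            SILENCE_TIMEOUT: { target: 'processing' },
          },
        },
        processing: {
          tags: ['processing'],
          on: {
            AI_RESPONSE_READY: {
              target: 'aiGenerating',
              actions: assign({ lastSpokenMessageId: ({ event }) => event.messageId }),
            },
          },
        },
        aiGenerating: {
          tags: ['aiGenerating', 'canSkipAudio'],
          on: { TTS_PLAYING: { target: 'aiSpeaking' }, SKIP_AUDIO: { target: 'listening' } },
        },
        aiSpeaking: {
          tags: ['aiSpeaking', 'canSkipAudio'],
          on: { TTS_FINISHED: { target: 'listening' }, SKIP_AUDIO: { target: 'listening' } },
        },
      },
    },
  },
});
```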
302
app/edit/page.tsx
Normal file
@@ -0,0 +1,302 @@
'use client';

/**
 * Edit Node Page
 *
 * Editor for reviewing and publishing node drafts generated from conversations.
 * Displays the AI-generated draft and allows editing before publishing.
 */

import {
  Stack,
  Title,
  Text,
  Paper,
  TextInput,
  Textarea,
  Button,
  Group,
  Container,
  Divider,
  Checkbox,
  Badge,
  Loader,
} from '@mantine/core';
import { useState, useEffect } from 'react';
import { IconDeviceFloppy, IconX, IconRefresh } from '@tabler/icons-react';
import { useAppMachine } from '@/hooks/useAppMachine';
import { useSelector } from '@xstate/react';
import { notifications } from '@mantine/notifications';

interface SuggestedNode {
  id: string;
  title: string;
  body: string;
  atp_uri: string;
  score: number;
}

export default function EditPage() {
  const appActor = useAppMachine();
  const pendingDraft = useSelector(appActor, (state) => state.context.pendingNodeDraft);

  const [title, setTitle] = useState('');
  const [content, setContent] = useState('');
  const [isPublishing, setIsPublishing] = useState(false);
  const [suggestedNodes, setSuggestedNodes] = useState<SuggestedNode[]>([]);
  const [selectedLinks, setSelectedLinks] = useState<string[]>([]);
  const [isLoadingSuggestions, setIsLoadingSuggestions] = useState(false);

  // Load draft when available
  useEffect(() => {
    if (pendingDraft) {
      setTitle(pendingDraft.title);
      setContent(pendingDraft.content);
    }
  }, [pendingDraft]);

  // Fetch link suggestions when content changes
  const fetchLinkSuggestions = async () => {
    if (!content.trim() || content.trim().length < 50) {
      setSuggestedNodes([]);
      return;
    }

    setIsLoadingSuggestions(true);
    try {
      const response = await fetch('/api/suggest-links', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        credentials: 'include',
        body: JSON.stringify({ body: content }),
      });

      if (!response.ok) {
        throw new Error('Failed to fetch suggestions');
      }

      const suggestions = await response.json();
      setSuggestedNodes(suggestions);
    } catch (error) {
      console.error('[Link Suggestions] Error:', error);
    } finally {
      setIsLoadingSuggestions(false);
    }
  };

  // Auto-fetch suggestions when content is substantial
  useEffect(() => {
    const timer = setTimeout(() => {
      if (content.trim().length >= 50) {
        fetchLinkSuggestions();
      }
    }, 1000); // Debounce 1 second

    return () => clearTimeout(timer);
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [content]); // fetchLinkSuggestions is stable and doesn't need to be in deps

  const handlePublish = async () => {
    if (!title.trim() || !content.trim()) {
      notifications.show({
        title: 'Missing content',
        message: 'Please provide both a title and content for your node',
        color: 'red',
      });
      return;
    }

    setIsPublishing(true);

    try {
      const response = await fetch('/api/nodes', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        credentials: 'include', // Include cookies for authentication
        body: JSON.stringify({
          title: title.trim(),
          body: content.trim(),
          links: selectedLinks,
        }),
      });

      if (!response.ok) {
        const errorData = await response.json();
        throw new Error(errorData.error || 'Failed to publish node');
      }

      const result = await response.json();

      // Show success notification
      const message = result.warning || 'Your node has been published to your Bluesky account';
      notifications.show({
        title: 'Node published!',
        message,
        color: result.warning ? 'yellow' : 'green',
      });

      // Transition back to conversation view
      // (Galaxy view requires the cache, which may have failed)
      appActor.send({
        type: 'CANCEL_EDIT', // Go back to conversation
      });
    } catch (error) {
      console.error('[Publish Node] Error:', error);
      notifications.show({
        title: 'Error',
        message: error instanceof Error ? error.message : 'Failed to publish node',
        color: 'red',
      });
    } finally {
      setIsPublishing(false);
    }
  };

  const handleCancel = () => {
    if (pendingDraft) {
      appActor.send({ type: 'CANCEL_EDIT' });
    } else {
      // Manual node creation - go back to conversation
      appActor.send({ type: 'NAVIGATE_TO_CONVO' });
    }
  };

  const toggleLinkSelection = (nodeId: string) => {
    setSelectedLinks((prev) =>
      prev.includes(nodeId)
        ? prev.filter((id) => id !== nodeId)
        : [...prev, nodeId]
    );
  };

  return (
    <Container size="md" py="xl" style={{ height: '100vh', display: 'flex', flexDirection: 'column' }}>
      <Stack gap="lg" style={{ flex: 1 }}>
        <Group justify="space-between">
          <Title order={2}>Edit Node</Title>
          <Group gap="md">
            <Button
              variant="subtle"
              color="gray"
              leftSection={<IconX size={18} />}
              onClick={handleCancel}
              disabled={isPublishing}
            >
              Cancel
            </Button>
            <Button
              variant="filled"
              color="blue"
              leftSection={<IconDeviceFloppy size={18} />}
              onClick={handlePublish}
              loading={isPublishing}
              disabled={!title.trim() || !content.trim()}
            >
              Publish Node
            </Button>
          </Group>
        </Group>

        <Paper p="xl" withBorder style={{ flex: 1 }}>
          <Stack gap="lg">
            <TextInput
              label="Title"
              placeholder="Enter a concise, compelling title"
              value={title}
              onChange={(e) => setTitle(e.currentTarget.value)}
              size="lg"
              required
            />

            <Divider />

            <Textarea
              label="Content"
              placeholder="Write your node content in markdown..."
              value={content}
              onChange={(e) => setContent(e.currentTarget.value)}
              minRows={15}
              autosize
              required
              styles={{
                input: {
                  fontFamily: 'monospace',
                },
              }}
            />

            {/* Link Suggestions Section */}
            {content.trim().length >= 50 && (
              <>
                <Divider />
                <Stack gap="sm">
                  <Group justify="space-between">
                    <Title order={4}>Suggested Links</Title>
                    <Group gap="xs">
                      {isLoadingSuggestions && <Loader size="sm" />}
                      <Button
                        size="xs"
                        variant="subtle"
                        leftSection={<IconRefresh size={14} />}
                        onClick={fetchLinkSuggestions}
                        disabled={isLoadingSuggestions}
                      >
                        Refresh
                      </Button>
                    </Group>
                  </Group>

                  {suggestedNodes.length === 0 && !isLoadingSuggestions && (
                    <Text size="sm" c="dimmed">
                      No similar nodes found. This will be your first node on this topic!
                    </Text>
                  )}

                  {suggestedNodes.map((node) => (
                    <Paper key={node.id} p="sm" withBorder>
                      <Stack gap="xs">
                        <Group gap="xs">
                          <Checkbox
                            checked={selectedLinks.includes(node.id)}
                            onChange={() => toggleLinkSelection(node.id)}
                          />
                          <div style={{ flex: 1 }}>
                            <Group justify="space-between">
                              <Text fw={600} size="sm">
                                {node.title}
                              </Text>
                              <Badge size="xs" variant="light">
                                {(node.score * 100).toFixed(0)}% similar
                              </Badge>
                            </Group>
                            <Text size="xs" c="dimmed" lineClamp={2}>
                              {node.body}
                            </Text>
                          </div>
                        </Group>
                      </Stack>
                    </Paper>
                  ))}
                </Stack>
              </>
            )}

            {pendingDraft?.conversationContext && (
              <>
                <Divider />
                <Paper p="md" withBorder style={{ backgroundColor: '#1a1b1e' }}>
                  <Text size="sm" fw={700} mb="sm">
                    Conversation Context
                  </Text>
                  <Text size="xs" c="dimmed" style={{ whiteSpace: 'pre-wrap' }}>
                    {pendingDraft.conversationContext}
                  </Text>
                </Paper>
              </>
            )}
          </Stack>
        </Paper>
      </Stack>
    </Container>
  );
}
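
Note: the `/api/suggest-links` route behind the "% similar" badge above is not part of this diff. The `score` field the page renders implies some similarity ranking in `[0, 1]`. A rough sketch of the kind of ranking helper such a route could use — the embedding source, node shape, and limit are assumptions for illustration:

```ts
// Hypothetical helper for an /api/suggest-links route — not the repo's actual code.
interface EmbeddedNode {
  id: string;
  title: string;
  body: string;
  atp_uri: string;
  embedding: number[];
}

// Standard cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1);
}

// Rank stored nodes against the draft's embedding. The UI shows
// `score * 100` as "% similar", so scores stay in [0, 1].
function rankSuggestions(draftEmbedding: number[], nodes: EmbeddedNode[], limit = 5) {
  return nodes
    .map((node) => ({ ...node, score: cosineSimilarity(draftEmbedding, node.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, limit);
}
```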
@@ -1,58 +1,19 @@
'use client';

import { Button, Box } from '@mantine/core';
import { Suspense, useState } from 'react';
import { Box, Text, Stack } from '@mantine/core';
import { Suspense } from 'react';
import { ThoughtGalaxy } from '@/components/ThoughtGalaxy';
import { notifications } from '@mantine/notifications';

export default function GalaxyPage() {
  const [isCalculating, setIsCalculating] = useState(false);
  // This key forces a re-render of the galaxy component
  const [galaxyKey, setGalaxyKey] = useState(Date.now());

  const handleCalculateGraph = async () => {
    setIsCalculating(true);
    try {
      const response = await fetch('/api/calculate-graph', { method: 'POST' });
      const data = await response.json();

      if (!response.ok) {
        throw new Error(data.error || 'Failed to calculate graph');
      }

      notifications.show({
        title: 'Success',
        message: data.message || `Mapped ${data.nodes_mapped} nodes to 3D space`,
        color: 'green',
      });

      // Refresh the galaxy component by changing its key
      setGalaxyKey(Date.now());
    } catch (error) {
      console.error(error);
      notifications.show({
        title: 'Error',
        message: error instanceof Error ? error.message : 'Failed to calculate graph',
        color: 'red',
      });
    } finally {
      setIsCalculating(false);
    }
  };

  return (
    <Box style={{ height: '100vh', width: '100vw', position: 'relative' }}>
      <Button
        onClick={handleCalculateGraph}
        loading={isCalculating}
        style={{ position: 'absolute', top: 20, left: 20, zIndex: 10 }}
      >
        Calculate My Graph
      </Button>

      {/* R3F Canvas for the 3D visualization */}
      <Suspense fallback={<Box>Loading 3D Scene...</Box>}>
        <ThoughtGalaxy key={galaxyKey} />
      <Suspense fallback={
        <Stack align="center" justify="center" style={{ height: '100vh' }}>
          <Text c="dimmed">Loading your thought galaxy...</Text>
        </Stack>
      }>
        <ThoughtGalaxy />
      </Suspense>
    </Box>
  );
@@ -5,12 +5,17 @@ import { MantineProvider, ColorSchemeScript } from "@mantine/core";
import { Notifications } from "@mantine/notifications";
import "@mantine/notifications/styles.css";
import { theme } from "./theme";
import { AppLayout } from "@/components/AppLayout";

const inter = Inter({ subsets: ["latin"] });

export const metadata: Metadata = {
  title: "Ponderants",
  description: "Your AI Thought Partner",
  icons: {
    icon: "/logo.svg",
    apple: "/logo.svg",
  },
};

export default function RootLayout({
@@ -27,7 +32,7 @@ export default function RootLayout({
      <body className={inter.className} suppressHydrationWarning>
        <MantineProvider theme={theme} defaultColorScheme="dark">
          <Notifications />
          {children}
          <AppLayout>{children}</AppLayout>
        </MantineProvider>
      </body>
    </html>
@@ -25,8 +25,6 @@ export const theme = createTheme({
  // Set default dark mode and grayscale for the "minimalist" look
  defaultRadius: 'md',
  fontFamily: 'Inter, sans-serif',
  // Enforce dark mode
  forceColorScheme: 'dark',

  // Set default component props for a consistent look
  components: {
59
components/AppLayout.tsx
Normal file
@@ -0,0 +1,59 @@
'use client';

/**
 * AppLayout Component
 *
 * Wraps the application with:
 * - AppStateMachineProvider (state management)
 * - Mantine AppShell (responsive layout)
 * - Navigation (mobile bottom bar / desktop sidebar)
 */

import { AppShell } from '@mantine/core';
import { useMediaQuery } from '@mantine/hooks';
import { AppStateMachineProvider } from './AppStateMachine';
import { MobileBottomBar } from './Navigation/MobileBottomBar';
import { MobileHeader } from './Navigation/MobileHeader';
import { DesktopSidebar } from './Navigation/DesktopSidebar';

export function AppLayout({ children }: { children: React.ReactNode }) {
  const isMobile = useMediaQuery('(max-width: 768px)');

  return (
    <AppStateMachineProvider>
      {/* Mobile Header - only on mobile */}
      {isMobile && <MobileHeader />}

      <AppShell
        navbar={{
          width: isMobile ? 0 : 200,
          breakpoint: 'sm',
        }}
        padding={isMobile ? 0 : 'md'}
        style={{ height: '100vh' }}
      >
        {/* Desktop Sidebar - only on desktop */}
        {!isMobile && (
          <AppShell.Navbar>
            <DesktopSidebar />
          </AppShell.Navbar>
        )}

        {/* Main Content */}
        <AppShell.Main
          style={{
            height: '100vh',
            overflow: 'auto',
            paddingTop: isMobile ? '64px' : '0', // Space for mobile header
            paddingBottom: isMobile ? '80px' : '0', // Space for mobile bottom bar
          }}
        >
          {children}
        </AppShell.Main>

        {/* Mobile Bottom Bar - only on mobile */}
        {isMobile && <MobileBottomBar />}
      </AppShell>
    </AppStateMachineProvider>
  );
}
|
||||
108
components/AppStateMachine.tsx
Normal file
@@ -0,0 +1,108 @@
'use client';

/**
 * AppStateMachine Provider
 *
 * Wraps the application with the app-level state machine.
 * Provides state and send function to all child components via context.
 * Also handles responsive mode detection and route synchronization.
 */

import { useEffect, useRef } from 'react';
import { useSelector } from '@xstate/react';
import { createActor } from 'xstate';
import { usePathname, useRouter } from 'next/navigation';
import { useMediaQuery } from '@mantine/hooks';
import { appMachine } from '@/lib/app-machine';
import { AppMachineContext } from '@/hooks/useAppMachine';

// Create the actor singleton outside the component to persist state
const appActor = createActor(appMachine);
appActor.start();

export function AppStateMachineProvider({ children }: { children: React.ReactNode }) {
  const state = useSelector(appActor, (state) => state);
  const send = appActor.send;
  const pathname = usePathname();
  const router = useRouter();

  // Track if this is the initial mount
  const isInitializedRef = useRef(false);
  // Track the last path we navigated to, to prevent loops
  const lastNavigatedPathRef = useRef<string | null>(null);

  // Detect mobile vs desktop
  const isMobile = useMediaQuery('(max-width: 768px)');

  // Update mode in state machine
  useEffect(() => {
    send({ type: 'SET_MODE', mode: isMobile ? 'mobile' : 'desktop' });
  }, [isMobile, send]);

  // Initialize state machine from URL on first mount ONLY
  useEffect(() => {
    if (isInitializedRef.current) return;

    console.log('[App Provider] Initializing state from URL:', pathname);

    // Determine which state the current path corresponds to
    let initialEvent: string | null = null;

    if (pathname === '/chat') {
      initialEvent = 'NAVIGATE_TO_CONVO';
    } else if (pathname === '/edit') {
      initialEvent = 'NAVIGATE_TO_EDIT';
    } else if (pathname === '/galaxy') {
      initialEvent = 'NAVIGATE_TO_GALAXY';
    }

    // Send the event to initialize state from URL
    if (initialEvent) {
      console.log('[App Provider] Setting initial state:', initialEvent);
      send({ type: initialEvent as any });
    }

    // Mark as initialized AFTER sending the event
    isInitializedRef.current = true;
  }, [pathname, send]); // Remove 'state' from dependencies!

  // State machine is source of truth: sync state → URL only
  // This effect ONLY runs when state changes, not when pathname changes
  useEffect(() => {
    // Don't navigate until initialized
    if (!isInitializedRef.current) {
      return;
    }

    let targetPath: string | null = null;

    if (state.matches('convo')) {
      targetPath = '/chat';
    } else if (state.matches('edit')) {
      targetPath = '/edit';
    } else if (state.matches('galaxy')) {
      targetPath = '/galaxy';
    }

    // ONLY navigate if we have a target path and haven't already navigated to it
    if (targetPath && targetPath !== lastNavigatedPathRef.current) {
      console.log('[App Provider] State machine navigating to:', targetPath);
      lastNavigatedPathRef.current = targetPath;
      router.push(targetPath);
    }
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [state.value]); // ONLY depend on state.value, NOT pathname or router!

  // Log state changes
  useEffect(() => {
    console.log('[App Provider] State:', state.value);
    console.log('[App Provider] Tags:', Array.from(state.tags));
    console.log('[App Provider] Context:', state.context);
  }, [state]);

  return (
    <AppMachineContext.Provider value={appActor}>
      {children}
    </AppMachineContext.Provider>
  );
}

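Note: the provider above imports `AppMachineContext`, and the navigation components below call `useAppMachine()`, but `hooks/useAppMachine.ts` itself is not part of this diff. A minimal sketch of what that hook presumably looks like, inferred purely from its call sites (the real file may differ):

```typescript
// Hypothetical sketch of hooks/useAppMachine.ts — this file is NOT in the diff;
// it is reconstructed from how AppMachineContext and useAppMachine are used.
import { createContext, useContext } from 'react';
import type { Actor } from 'xstate';
import type { appMachine } from '@/lib/app-machine';

export const AppMachineContext = createContext<Actor<typeof appMachine> | null>(null);

export function useAppMachine() {
  const actor = useContext(AppMachineContext);
  if (!actor) {
    // Guard against use outside the provider defined in AppStateMachine.tsx
    throw new Error('useAppMachine must be used inside AppStateMachineProvider');
  }
  return actor;
}
```
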
@@ -1,23 +1,19 @@
'use client';

import { useChat } from 'ai';
import { useChat } from '@ai-sdk/react';
import { Container, ScrollArea, Paper, Group, TextInput, Button, Stack, Text, Box } from '@mantine/core';
import { useEffect, useRef } from 'react';
import { useEffect, useRef, useState } from 'react';
import { MicrophoneRecorder } from './MicrophoneRecorder';

export function ChatInterface() {
  const viewport = useRef<HTMLDivElement>(null);
  const [input, setInput] = useState('');

  const {
    messages,
    input,
    handleInputChange,
    handleSubmit,
    setInput,
    isLoading,
  } = useChat({
    api: '/api/chat',
  });
    sendMessage,
    status,
  } = useChat();

  // Auto-scroll to bottom when new messages arrive
  useEffect(() => {
@@ -57,7 +53,12 @@ export function ChatInterface() {
            radius="md"
            bg={message.role === 'user' ? 'dark.6' : 'dark.7'}
          >
            <Text size="sm">{message.content}</Text>
            <Text size="sm">
              {/* Extract text from parts */}
              {('parts' in message && Array.isArray((message as any).parts))
                ? (message as any).parts.find((p: any) => p.type === 'text')?.text || ''
                : (message as any).content || ''}
            </Text>
          </Paper>
        </Box>
      ))}
@@ -65,16 +66,21 @@ export function ChatInterface() {
      </ScrollArea>

      {/* Input area */}
      <form onSubmit={handleSubmit}>
      <form onSubmit={(e) => {
        e.preventDefault();
        if (!input.trim() || status === 'submitted' || status === 'streaming') return;
        sendMessage({ text: input });
        setInput('');
      }}>
        <Paper withBorder p="sm" radius="xl">
          <Group gap="xs">
            <TextInput
              value={input}
              onChange={handleInputChange}
              onChange={(e) => setInput(e.currentTarget.value)}
              placeholder="Speak or type your thoughts..."
              style={{ flex: 1 }}
              variant="unstyled"
              disabled={isLoading}
              disabled={status === 'submitted' || status === 'streaming'}
            />

            {/* Microphone Recorder */}
@@ -96,7 +102,7 @@ export function ChatInterface() {
              }}
            />

            <Button type="submit" radius="xl" loading={isLoading}>
            <Button type="submit" radius="xl" loading={status === 'submitted' || status === 'streaming'}>
              Send
            </Button>
          </Group>

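For readability, here is the component shape pieced together from the hunks above — a sketch of the migrated `@ai-sdk/react` pattern, not the verbatim file (the ScrollArea, Mantine styling, and MicrophoneRecorder wiring are elided):

```typescript
'use client';

// Sketch only: consolidates the useChat migration shown in the hunks above.
// Input state is now owned locally; useChat supplies messages, sendMessage,
// and status instead of the old input/handleInputChange/handleSubmit helpers.
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export function ChatSketch() {
  const [input, setInput] = useState('');
  const { messages, sendMessage, status } = useChat();
  const busy = status === 'submitted' || status === 'streaming';

  return (
    <form
      onSubmit={(e) => {
        e.preventDefault();
        if (!input.trim() || busy) return;
        sendMessage({ text: input });
        setInput('');
      }}
    >
      {messages.map((m) => (
        <p key={m.id}>
          {/* Messages now carry `parts`; render only the text parts */}
          {m.parts.map((p, i) => (p.type === 'text' ? <span key={i}>{p.text}</span> : null))}
        </p>
      ))}
      <input value={input} onChange={(e) => setInput(e.currentTarget.value)} disabled={busy} />
      <button type="submit" disabled={busy}>Send</button>
    </form>
  );
}
```
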
127
components/Navigation/DesktopSidebar.tsx
Normal file
@@ -0,0 +1,127 @@
'use client';

/**
 * Desktop Sidebar Navigation
 *
 * Vertical sidebar navigation for desktop (≥ 768px).
 * Shows three navigation links: Convo, Edit, Galaxy.
 * Highlights the active mode based on the app state machine.
 */

import { Stack, NavLink, Box, Text, Group, Image, Divider } from '@mantine/core';
import { IconMessageCircle, IconEdit, IconUniverse } from '@tabler/icons-react';
import { useSelector } from '@xstate/react';
import { useAppMachine } from '@/hooks/useAppMachine';
import { UserMenu } from '@/components/UserMenu';

export function DesktopSidebar() {
  const actor = useAppMachine();
  const state = useSelector(actor, (state) => state);
  const send = actor.send;

  const handleNavigation = (target: 'convo' | 'edit' | 'galaxy') => {
    console.log('[Desktop Nav] Navigating to:', target);

    if (target === 'convo') {
      send({ type: 'NAVIGATE_TO_CONVO' });
    } else if (target === 'edit') {
      send({ type: 'NAVIGATE_TO_EDIT' });
    } else if (target === 'galaxy') {
      send({ type: 'NAVIGATE_TO_GALAXY' });
    }
  };

  const isConvo = state.matches('convo');
  const isEdit = state.matches('edit');
  const isGalaxy = state.matches('galaxy');

  console.log('[Desktop Nav] Current state:', state.value, {
    isConvo,
    isEdit,
    isGalaxy,
  });

  return (
    <Box
      style={{
        width: '100%',
        height: '100%',
        borderRight: '1px solid #dee2e6',
        padding: '1rem',
      }}
    >
      <Stack gap="xs">
        <Group gap="sm" mb="md" align="center">
          <Image
            src="/logo.svg"
            alt="Ponderants logo"
            w={48}
            h={48}
            style={{ flexShrink: 0 }}
          />
          <Text fw={700} size="md" c="dimmed">
            Ponderants
          </Text>
        </Group>

        <NavLink
          label="Convo"
          leftSection={<IconMessageCircle size={20} />}
          active={isConvo}
          onClick={() => handleNavigation('convo')}
          variant="filled"
        />

        <NavLink
          label="Manual"
          leftSection={<IconEdit size={20} />}
          active={isEdit}
          onClick={() => handleNavigation('edit')}
          variant="filled"
        />

        <NavLink
          label="Galaxy"
          leftSection={<IconUniverse size={20} />}
          active={isGalaxy}
          onClick={() => handleNavigation('galaxy')}
          variant="filled"
        />

        <Divider my="md" />

        <Box style={{ padding: '0.5rem' }}>
          <UserMenu />
        </Box>

        {/* Development state panel */}
        {process.env.NODE_ENV === 'development' && (
          <Box mt="xl" p="sm" style={{ border: '1px solid #495057', borderRadius: '4px' }}>
            <Text size="xs" fw={700} c="dimmed" mb="xs">
              DEV: App State
            </Text>
            <Text size="xs" c="dimmed">
              State: {JSON.stringify(state.value)}
            </Text>
            <Text size="xs" c="dimmed">
              Tags: {Array.from(state.tags).join(', ')}
            </Text>
            <Text size="xs" c="dimmed">
              Mode: {state.context.mode}
            </Text>
            {state.context.pendingNodeDraft && (
              <Text size="xs" c="dimmed">
                Draft: {state.context.pendingNodeDraft.title || '(untitled)'}
              </Text>
            )}
            {state.context.currentNodeId && (
              <Text size="xs" c="dimmed">
                Node: {state.context.currentNodeId}
              </Text>
            )}
          </Box>
        )}
      </Stack>
    </Box>
  );
}

95
components/Navigation/MobileBottomBar.tsx
Normal file
@@ -0,0 +1,95 @@
'use client';

/**
 * Mobile Bottom Bar Navigation
 *
 * Fixed bottom navigation for mobile devices (< 768px).
 * Shows three buttons: Convo, Edit, Galaxy.
 * Highlights the active mode based on the app state machine.
 */

import { Group, Button, Paper, ActionIcon, Box } from '@mantine/core';
import { IconMessageCircle, IconEdit, IconUniverse, IconUser } from '@tabler/icons-react';
import { useSelector } from '@xstate/react';
import { useAppMachine } from '@/hooks/useAppMachine';
import { UserMenu } from '@/components/UserMenu';

export function MobileBottomBar() {
  const actor = useAppMachine();
  const state = useSelector(actor, (state) => state);
  const send = actor.send;

  const handleNavigation = (target: 'convo' | 'edit' | 'galaxy') => {
    console.log('[Mobile Nav] Navigating to:', target);

    if (target === 'convo') {
      send({ type: 'NAVIGATE_TO_CONVO' });
    } else if (target === 'edit') {
      send({ type: 'NAVIGATE_TO_EDIT' });
    } else if (target === 'galaxy') {
      send({ type: 'NAVIGATE_TO_GALAXY' });
    }
  };

  const isConvo = state.matches('convo');
  const isEdit = state.matches('edit');
  const isGalaxy = state.matches('galaxy');

  console.log('[Mobile Nav] Current state:', state.value, {
    isConvo,
    isEdit,
    isGalaxy,
  });

  return (
    <Paper
      withBorder
      p="md"
      radius={0}
      style={{
        position: 'fixed',
        bottom: 0,
        left: 0,
        right: 0,
        zIndex: 100,
        borderTop: '1px solid #dee2e6',
      }}
    >
      <Group justify="space-around" grow>
        <ActionIcon
          variant={isConvo ? 'filled' : 'subtle'}
          color={isConvo ? 'blue' : 'gray'}
          onClick={() => handleNavigation('convo')}
          size={48}
          radius="md"
        >
          <IconMessageCircle size={24} />
        </ActionIcon>

        <ActionIcon
          variant={isEdit ? 'filled' : 'subtle'}
          color={isEdit ? 'blue' : 'gray'}
          onClick={() => handleNavigation('edit')}
          size={48}
          radius="md"
        >
          <IconEdit size={24} />
        </ActionIcon>

        <ActionIcon
          variant={isGalaxy ? 'filled' : 'subtle'}
          color={isGalaxy ? 'blue' : 'gray'}
          onClick={() => handleNavigation('galaxy')}
          size={48}
          radius="md"
        >
          <IconUniverse size={24} />
        </ActionIcon>

        <Box style={{ display: 'flex', alignItems: 'center', justifyContent: 'center' }}>
          <UserMenu />
        </Box>
      </Group>
    </Paper>
  );
}

40
components/Navigation/MobileHeader.tsx
Normal file
@@ -0,0 +1,40 @@
'use client';

/**
 * Mobile Header
 *
 * Fixed header for mobile devices showing the Ponderants logo.
 */

import { Group, Image, Text, Paper } from '@mantine/core';

export function MobileHeader() {
  return (
    <Paper
      withBorder
      p="md"
      radius={0}
      style={{
        position: 'fixed',
        top: 0,
        left: 0,
        right: 0,
        zIndex: 100,
        borderBottom: '1px solid #dee2e6',
      }}
    >
      <Group gap="sm" align="center">
        <Image
          src="/logo.svg"
          alt="Ponderants logo"
          w={56}
          h={56}
          style={{ flexShrink: 0 }}
        />
        <Text fw={700} size="xl">
          Ponderants
        </Text>
      </Group>
    </Paper>
  );
}

@@ -7,9 +7,10 @@ import {
  Text,
} from '@react-three/drei';
import { Suspense, useEffect, useRef, useState } from 'react';
import Surreal from 'surrealdb';
import { Stack, Text as MantineText } from '@mantine/core';
import * as THREE from 'three';

// Define the shape of nodes and links from DB
// Define the shape of nodes and links from API
interface NodeData {
  id: string;
  title: string;
@@ -67,42 +68,95 @@ export function ThoughtGalaxy() {
  const [nodes, setNodes] = useState<NodeData[]>([]);
  const [links, setLinks] = useState<LinkData[]>([]);
  const cameraControlsRef = useRef<CameraControls>(null);
  const hasFitCamera = useRef(false);

  // Fetch data from SurrealDB on mount
  // Fetch data from API on mount and poll for updates
  useEffect(() => {
    async function fetchData() {
      // Client-side connection
      const db = new Surreal();
      await db.connect(process.env.NEXT_PUBLIC_SURREALDB_WSS_URL!);
      try {
        const response = await fetch('/api/galaxy', {
          credentials: 'include', // Include cookies for authentication
        });

      // Get the token from the cookie
      const tokenCookie = document.cookie
        .split('; ')
        .find((row) => row.startsWith('ponderants-auth='));
        if (!response.ok) {
          console.error('[ThoughtGalaxy] Failed to fetch galaxy data:', response.statusText);
          return;
        }

      if (!tokenCookie) {
        console.error('[ThoughtGalaxy] No auth token found');
        return;
        const data = await response.json();

        if (data.message) {
          console.log('[ThoughtGalaxy]', data.message);
          // If calculating, poll again in 2 seconds
          setTimeout(fetchData, 2000);
          return;
        }

        setNodes(data.nodes || []);
        setLinks(data.links || []);

        console.log(`[ThoughtGalaxy] Loaded ${data.nodes?.length || 0} nodes and ${data.links?.length || 0} links`);
      } catch (error) {
        console.error('[ThoughtGalaxy] Error fetching data:', error);
      }

      const token = tokenCookie.split('=')[1];
      await db.authenticate(token);

      // Fetch nodes that have coordinates
      const nodeResults = await db.query<[NodeData[]]>(
        'SELECT id, title, coords_3d FROM node WHERE coords_3d != NONE'
      );
      setNodes(nodeResults[0] || []);

      // Fetch links
      const linkResults = await db.query<[LinkData[]]>('SELECT in, out FROM links_to');
      setLinks(linkResults[0] || []);

      console.log(`[ThoughtGalaxy] Loaded ${nodeResults[0]?.length || 0} nodes and ${linkResults[0]?.length || 0} links`);
    }

    fetchData();
  }, []);

  // Function to fit camera to all nodes
  const fitCameraToNodes = () => {
    if (!cameraControlsRef.current || nodes.length === 0) {
      console.log('[ThoughtGalaxy] Cannot fit camera:', {
        hasRef: !!cameraControlsRef.current,
        nodesLength: nodes.length,
      });
      return;
    }

    console.log('[ThoughtGalaxy] Fitting camera to', nodes.length, 'nodes...');

    // Create a THREE.Box3 from node positions
    const box = new THREE.Box3();
    nodes.forEach((node) => {
      box.expandByPoint(new THREE.Vector3(
        node.coords_3d[0],
        node.coords_3d[1],
        node.coords_3d[2]
      ));
    });

    console.log('[ThoughtGalaxy] Bounding box:', {
      min: box.min,
      max: box.max,
      size: box.getSize(new THREE.Vector3()),
    });

    // Use CameraControls' built-in fitToBox method
    try {
      cameraControlsRef.current.fitToBox(
        box,
        false, // Don't animate on initial load
        { paddingLeft: 0.5, paddingRight: 0.5, paddingTop: 0.5, paddingBottom: 0.5 }
      );
      console.log('[ThoughtGalaxy] ✓ Camera fitted to bounds');
      hasFitCamera.current = true;
    } catch (error) {
      console.error('[ThoughtGalaxy] Error fitting camera:', error);
    }
  };

  // Fit camera when nodes change and we haven't fitted yet
  useEffect(() => {
    if (!hasFitCamera.current && nodes.length > 0) {
      // Try to fit after a short delay to ensure Canvas is ready
      const timer = setTimeout(() => {
        fitCameraToNodes();
      }, 100);
      return () => clearTimeout(timer);
    }
  }, [nodes]);

  // Map links to node positions
  const linkLines = links
    .map((link) => {
@@ -118,24 +172,48 @@ export function ThoughtGalaxy() {
    })
    .filter(Boolean) as { start: [number, number, number]; end: [number, number, number] }[];

  // Camera animation
  // Camera animation on node click
  const handleNodeClick = (node: NodeData) => {
    if (cameraControlsRef.current) {
      cameraControlsRef.current.smoothTime = 0.8;
      cameraControlsRef.current.setLookAt(
        node.coords_3d[0] + 1,
        node.coords_3d[1] + 1,
        node.coords_3d[2] + 1,
      // Smoothly move to look at the clicked node
      cameraControlsRef.current.moveTo(
        node.coords_3d[0],
        node.coords_3d[1],
        node.coords_3d[2],
        true // Enable smooth transition
        true // Animate
      );
    }
  };

  console.log('[ThoughtGalaxy] Rendering with', nodes.length, 'nodes and', linkLines.length, 'link lines');

  // Show message if no nodes are ready yet
  if (nodes.length === 0) {
    return (
      <Stack align="center" justify="center" style={{ height: '100vh', width: '100vw' }}>
        <MantineText size="lg" c="dimmed">
          Create at least 3 nodes to visualize your thought galaxy
        </MantineText>
        <MantineText size="sm" c="dimmed">
          Nodes with content will automatically generate embeddings and 3D coordinates
        </MantineText>
      </Stack>
    );
  }

  return (
    <Canvas camera={{ position: [0, 5, 10], fov: 60 }}>
    <Canvas
      camera={{ position: [0, 5, 10], fov: 60 }}
      style={{ width: '100%', height: '100%' }}
      gl={{ preserveDrawingBuffer: true }}
      onCreated={(state) => {
        console.log('[ThoughtGalaxy] Canvas created successfully');
        // Try to fit camera now that scene is ready
        if (!hasFitCamera.current && nodes.length > 0) {
          setTimeout(() => fitCameraToNodes(), 50);
        }
      }}
    >
      <ambientLight intensity={0.5} />
      <pointLight position={[10, 10, 10]} intensity={1} />
      <CameraControls ref={cameraControlsRef} />

349
docs/fixes/galaxy-graph-fix.md
Normal file
@@ -0,0 +1,349 @@
# Galaxy Graph Visualization Fix

## Problems

1. **Invalid URL Error**: The ThoughtGalaxy component was failing with:
   ```
   TypeError: Failed to construct 'URL': Invalid URL
       at parseUrl (surreal.ts:745:14)
       at Surreal.connectInner (surreal.ts:93:20)
       at Surreal.connect (surreal.ts:84:22)
       at fetchData (ThoughtGalaxy.tsx:76:16)
   ```

2. **Manual Calculation Required**: Users had to manually click the "Calculate My Graph" button to trigger UMAP dimensionality reduction

3. **"Not enough nodes" despite having 3+ nodes**: The system reported insufficient nodes even after creating 3+ nodes with content

## Root Causes

### 1. Client-Side Database Connection
The `ThoughtGalaxy.tsx` client component was attempting to connect directly to SurrealDB:

```typescript
// ❌ Wrong: Client component trying to connect to database
import Surreal from 'surrealdb';

useEffect(() => {
  const db = new Surreal();
  await db.connect(process.env.NEXT_PUBLIC_SURREALDB_WSS_URL!); // undefined!
  // ...
}, []);
```

Problems:
- The `NEXT_PUBLIC_SURREALDB_WSS_URL` environment variable didn't exist
- Client components shouldn't connect directly to databases (a security/architecture violation)
- No authentication handling on the client side

### 2. Manual Trigger Required
Graph calculation only happened when the user clicked a button. There was no automatic detection of when calculation was needed.

### 3. Connection Method Inconsistency
The `calculate-graph` route was using an inline database connection instead of the shared `connectToDB()` helper, leading to potential authentication mismatches.

## Solutions

### 1. Created Server-Side Galaxy API Route
Created `/app/api/galaxy/route.ts` to handle all database access server-side:

```typescript
export async function GET(request: NextRequest) {
  const cookieStore = await cookies();
  const surrealJwt = cookieStore.get('ponderants-auth')?.value;

  if (!surrealJwt) {
    return NextResponse.json({ error: 'Not authenticated' }, { status: 401 });
  }

  const userSession = verifySurrealJwt(surrealJwt);
  if (!userSession) {
    return NextResponse.json({ error: 'Invalid auth token' }, { status: 401 });
  }

  const { did: userDid } = userSession;

  try {
    const db = await connectToDB();

    // Fetch nodes that have 3D coordinates
    const nodesQuery = `
      SELECT id, title, coords_3d
      FROM node
      WHERE user_did = $userDid AND coords_3d != NONE
    `;
    const nodeResults = await db.query<[NodeData[]]>(nodesQuery, { userDid });
    const nodes = nodeResults[0] || [];

    // Fetch links between nodes
    const linksQuery = `
      SELECT in, out
      FROM links_to
    `;
    const linkResults = await db.query<[LinkData[]]>(linksQuery);
    const links = linkResults[0] || [];

    // Auto-trigger calculation if needed
    if (nodes.length === 0) {
      const unmappedQuery = `
        SELECT count() as count
        FROM node
        WHERE user_did = $userDid AND embedding != NONE AND coords_3d = NONE
        GROUP ALL
      `;
      const unmappedResults = await db.query<[Array<{ count: number }>]>(unmappedQuery, { userDid });
      const unmappedCount = unmappedResults[0]?.[0]?.count || 0;

      if (unmappedCount >= 3) {
        console.log(`[Galaxy API] Found ${unmappedCount} unmapped nodes, triggering calculation...`);

        // Trigger graph calculation (don't await, return current state)
        fetch(`${process.env.NEXT_PUBLIC_BASE_URL || 'http://localhost:3000'}/api/calculate-graph`, {
          method: 'POST',
          headers: {
            'Cookie': `ponderants-auth=${surrealJwt}`,
          },
        }).catch((err) => {
          console.error('[Galaxy API] Failed to trigger graph calculation:', err);
        });

        return NextResponse.json({
          nodes: [],
          links: [],
          message: 'Calculating 3D coordinates... Refresh in a moment.',
        });
      }
    }

    console.log(`[Galaxy API] Returning ${nodes.length} nodes and ${links.length} links`);

    return NextResponse.json({
      nodes,
      links,
    });
  } catch (error) {
    console.error('[Galaxy API] Error:', error);
    return NextResponse.json(
      { error: 'Failed to fetch galaxy data' },
      { status: 500 }
    );
  }
}
```

Key features:
- ✅ Server-side authentication with JWT verification
- ✅ Data isolation via `user_did` filtering
- ✅ Auto-detection of unmapped nodes
- ✅ Automatic triggering of UMAP calculation
- ✅ Progress messaging for client polling

### 2. Updated ThoughtGalaxy Component
Changed from a direct database connection to API-based data fetching:

**Before:**
```typescript
import Surreal from 'surrealdb';

useEffect(() => {
  async function fetchData() {
    const db = new Surreal();
    await db.connect(process.env.NEXT_PUBLIC_SURREALDB_WSS_URL!);
    const token = document.cookie.split('ponderants-auth=')[1];
    await db.authenticate(token);

    const nodeResults = await db.query('SELECT id, title, coords_3d FROM node...');
    setNodes(nodeResults[0] || []);
  }
  fetchData();
}, []);
```

**After:**
```typescript
// No Surreal import needed

useEffect(() => {
  async function fetchData() {
    try {
      const response = await fetch('/api/galaxy', {
        credentials: 'include', // Include cookies for authentication
      });

      if (!response.ok) {
        console.error('[ThoughtGalaxy] Failed to fetch galaxy data:', response.statusText);
        return;
      }

      const data = await response.json();

      if (data.message) {
        console.log('[ThoughtGalaxy]', data.message);
        // If calculating, poll again in 2 seconds
        setTimeout(fetchData, 2000);
        return;
      }

      setNodes(data.nodes || []);
      setLinks(data.links || []);

      console.log(`[ThoughtGalaxy] Loaded ${data.nodes?.length || 0} nodes and ${data.links?.length || 0} links`);
    } catch (error) {
      console.error('[ThoughtGalaxy] Error fetching data:', error);
    }
  }

  fetchData();
}, []);
```

Key improvements:
- ✅ No client-side database connection
- ✅ Proper authentication via HTTP-only cookies
- ✅ Polling mechanism for in-progress calculations
- ✅ Better error handling

### 3. Fixed calculate-graph Route
Updated `/app/api/calculate-graph/route.ts` to use shared helpers:

**Before:**
```typescript
const db = new (await import('surrealdb')).default();
await db.connect(process.env.SURREALDB_URL!);
await db.signin({
  username: process.env.SURREALDB_USER!,
  password: process.env.SURREALDB_PASS!,
});

const jwt = require('jsonwebtoken');
const decoded = jwt.decode(surrealJwt) as { did: string };
const userDid = decoded?.did;
```

**After:**
```typescript
import { connectToDB } from '@/lib/db';
import { verifySurrealJwt } from '@/lib/auth/jwt';

const userSession = verifySurrealJwt(surrealJwt);
if (!userSession) {
  return NextResponse.json({ error: 'Invalid auth token' }, { status: 401 });
}

const { did: userDid } = userSession;

const db = await connectToDB();
```

Benefits:
- ✅ Consistent authentication across all routes
- ✅ Proper JWT verification (not just decode)
- ✅ Reusable code (DRY principle)

### 4. Created Debug Endpoint
Added `/app/api/debug/nodes/route.ts` for database inspection:

```typescript
export async function GET(request: NextRequest) {
  const cookieStore = await cookies();
  const surrealJwt = cookieStore.get('ponderants-auth')?.value;

  if (!surrealJwt) {
    return NextResponse.json({ error: 'Not authenticated' }, { status: 401 });
  }

  const userSession = verifySurrealJwt(surrealJwt);
  if (!userSession) {
    return NextResponse.json({ error: 'Invalid auth token' }, { status: 401 });
  }

  const { did: userDid } = userSession;

  try {
    const db = await connectToDB();

    const nodesQuery = `
      SELECT id, title, body, atp_uri, embedding, coords_3d
      FROM node
      WHERE user_did = $userDid
    `;
    const results = await db.query(nodesQuery, { userDid });
    const nodes = results[0] || [];

    const stats = {
      total: nodes.length,
      with_embeddings: nodes.filter((n: any) => n.embedding).length,
      with_coords: nodes.filter((n: any) => n.coords_3d).length,
      without_embeddings: nodes.filter((n: any) => !n.embedding).length,
      without_coords: nodes.filter((n: any) => !n.coords_3d).length,
    };

    return NextResponse.json({
      stats,
      nodes: nodes.map((n: any) => ({
        id: n.id,
        title: n.title,
        atp_uri: n.atp_uri,
        has_embedding: !!n.embedding,
        has_coords: !!n.coords_3d,
        coords_3d: n.coords_3d,
      })),
    });
  } catch (error) {
    console.error('[Debug Nodes] Error:', error);
    return NextResponse.json({ error: String(error) }, { status: 500 });
  }
}
```

Use: Visit `/api/debug/nodes` while logged in to see your node statistics and data.
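As a quick smoke test, something like the following should work from the browser console on an authenticated session (the logged value is illustrative; the response shape mirrors the route above):

```typescript
// Hit the debug endpoint with the auth cookie attached.
const res = await fetch('/api/debug/nodes', { credentials: 'include' });
const { stats } = await res.json();
console.log(stats); // e.g. { total: 5, with_embeddings: 5, with_coords: 0, ... }
```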

## Auto-Calculation Flow

1. **User visits Galaxy page** → ThoughtGalaxy component mounts
2. **Component fetches data** → `GET /api/galaxy`
3. **API checks for coords** → Query: `WHERE coords_3d != NONE`
4. **If no coords found** → Query unmapped count: `WHERE embedding != NONE AND coords_3d = NONE`
5. **If ≥3 unmapped nodes** → Trigger `POST /api/calculate-graph` (don't wait)
6. **Return progress message** → `{ message: 'Calculating 3D coordinates...' }`
7. **Client polls** → setTimeout 2 seconds, fetch again
8. **UMAP completes** → Next poll returns actual node data
9. **Client renders** → 3D visualization appears

## Files Changed

1. `/components/ThoughtGalaxy.tsx` - Removed direct DB connection, added API-based fetching and polling
2. `/app/api/galaxy/route.ts` - **NEW** - Server-side galaxy data endpoint with auto-calculation
3. `/app/api/calculate-graph/route.ts` - Updated to use `connectToDB()` and `verifySurrealJwt()`
4. `/app/api/debug/nodes/route.ts` - **NEW** - Debug endpoint for inspecting node data

## Verification

After the fix:

```bash
# Server logs show auto-calculation:
[Galaxy API] Found 5 unmapped nodes, triggering calculation...
[Calculate Graph] Processing 5 nodes for UMAP projection
[Calculate Graph] Running UMAP dimensionality reduction...
[Calculate Graph] ✓ UMAP projection complete
[Calculate Graph] ✓ Updated 5 nodes with 3D coordinates
```

```bash
# Client logs show polling:
[ThoughtGalaxy] Calculating 3D coordinates... Refresh in a moment.
[ThoughtGalaxy] Loaded 5 nodes and 3 links
```

## Architecture Note

This fix maintains the "Source of Truth vs. App View Cache" pattern:
- **ATproto PDS** - Canonical source of Node content (com.ponderants.node records)
- **SurrealDB** - Performance cache that stores:
  - Copy of node data for fast access
  - Vector embeddings for similarity search
  - Pre-computed 3D coordinates for visualization
  - Graph links between nodes

The auto-calculation ensures the cache stays enriched with visualization data without user intervention.

122
docs/fixes/surrealdb-cache-fix.md
Normal file
@@ -0,0 +1,122 @@
# SurrealDB Cache Authentication Fix

## Problem

Node publishing was showing a yellow warning notification:
```
Node published to Bluesky, but cache update failed. Advanced features may be unavailable.
```

Server logs showed:
```
[POST /api/nodes] ⚠ SurrealDB cache write failed (non-critical):
Error [ResponseError]: There was a problem with the database: There was a problem with authentication
```

## Root Cause

The `connectToDB()` function in `lib/db.ts` was attempting to authenticate with SurrealDB using our application's custom JWT token:

```typescript
await db.authenticate(token);
```

However, SurrealDB doesn't know how to validate our custom JWT tokens. The `db.authenticate()` method is for SurrealDB's own token-based authentication system, not for validating external JWTs.

## Solution

Changed `connectToDB()` to use root credentials instead:

### Before:
```typescript
export async function connectToDB(token: string): Promise<Surreal> {
  const db = new Surreal();
  await db.connect(SURREALDB_URL);
  await db.authenticate(token); // ❌ This fails
  await db.use({ namespace, database });
  return db;
}
```

### After:
```typescript
export async function connectToDB(): Promise<Surreal> {
  const db = new Surreal();
  await db.connect(SURREALDB_URL);
  await db.signin({
    username: SURREALDB_USER, // ✅ Use root credentials
    password: SURREALDB_PASS,
  });
  await db.use({ namespace, database });
  return db;
}
```

## Data Security

Since we're now using root credentials instead of JWT-based authentication, we maintain data isolation by:

1. **Extracting user_did from the verified JWT** in API routes
2. **Filtering all queries by user_did** to ensure users only access their own data

Example from `/app/api/nodes/route.ts`:
```typescript
// Verify JWT to get user's DID
const userSession = verifySurrealJwt(surrealJwt);
const { did: userDid } = userSession;

// Create node with user_did field
const nodeData = {
  user_did: userDid, // ✅ Enforces data ownership
  atp_uri: atp_uri,
  title: title,
  body: body,
  embedding: embedding,
};
await db.create('node', nodeData);
```

Example from `/app/api/suggest-links/route.ts`:
```typescript
// Query filtered by user_did
const query = `
  SELECT * FROM node
  WHERE user_did = $user_did // ✅ Only user's own nodes
  ORDER BY score DESC
  LIMIT 5;
`;
await db.query(query, { user_did: userDid });
```

## Files Changed

1. `/lib/db.ts` - Changed authentication method
2. `/app/api/nodes/route.ts` - Removed JWT parameter from the `connectToDB()` call
3. `/app/api/suggest-links/route.ts` - Updated to use root credentials and filter by `user_did`

## Test

Created `/tests/magnitude/cache-success.mag.ts` to verify:
- Node publishes successfully
- GREEN success notification (not the yellow warning)
- No "cache update failed" message

## Verification

After the fix, server logs show:
```
[POST /api/nodes] ✓ Published to ATproto PDS
[POST /api/nodes] ✓ Generated embedding vector
[POST /api/nodes] ✓ Cached node in SurrealDB
POST /api/nodes 200 in 1078ms
```

No more authentication errors! 🎉

## Architecture Note

This implements the "App View Cache" pattern where:
- **ATproto PDS** is the source of truth (decentralized, user-owned)
- **SurrealDB** is a performance cache (centralized, app-managed)

The cache uses root credentials but enforces data isolation through `user_did` filtering in application code, similar to how the OAuth session store works (`lib/auth/oauth-session-store.ts`).

@@ -1,208 +1,33 @@
# **File: COMMIT_10_LINKING.md**
# **File: COMMIT_11_VIZ.md**

## **Commit 10: Node Editor & AI-Powered Linking**
## **Commit 11: 3D "Thought Galaxy" Visualization**

### **Objective**

Build the node editor UI and the AI-powered "Find related" feature. This commit will:
Implement the 3D "Thought Galaxy" visualization using React Three Fiber (R3F). This commit addresses **Risk 3 (UMAP Projection)** by using the "Calculate My Graph" button strategy for the hackathon.

1. Create the editor page (/editor/[id]) that is pre-filled by the chat (Commit 07) or loaded from the DB.
2. Implement the "Publish" button, which calls the /api/nodes route (from Commit 06).
3. Implement the "Find related" button, which calls a *new* /api/suggest-links route.
4. Implement the /api/suggest-links route, which generates an embedding for the current draft and uses SurrealDB's vector search to find similar nodes.[15]
1. Create an API route /api/calculate-graph that:
   * Fetches all the user's node embeddings from SurrealDB.
   * Uses umap-js to run dimensionality reduction from 1536-D down to 3-D.[26]
   * Updates the coords_3d field for each node in SurrealDB.
2. Create a client-side R3F component (/app/galaxy) that:
   * Fetches all nodes *with* coords_3d coordinates.
   * Renders each node as a <mesh>.
   * Renders links as <Line>.
   * Uses <CameraControls> for smooth onClick interaction.

### **Implementation Specification**

**1. Create Editor Page (app/editor/[id]/page.tsx)**
**1. Create Graph Calculation API (app/api/calculate-graph/route.ts)**

Create a file at /app/editor/[id]/page.tsx:

TypeScript

'use client';

import {
  Container,
  Title,
  TextInput,
  Textarea,
  Button,
  Stack,
  Paper,
  Text,
  LoadingOverlay,
  Group,
} from '@mantine/core';
import { useForm } from '@mantine/form';
import { useSearchParams, useRouter, useParams } from 'next/navigation';
import { useEffect, useState } from 'react';

// Define the shape of a suggested link
interface SuggestedNode {
  id: string;
  title: string;
  body: string;
  score: number;
}

export default function EditorPage() {
  const router = useRouter();
  const params = useParams();
  const searchParams = useSearchParams();

  const [isPublishing, setIsPublishing] = useState(false);
  const [isFinding, setIsFinding] = useState(false);
  const [suggestions, setSuggestions] = useState<SuggestedNode[]>([]);

  const form = useForm({
    initialValues: {
      title: '',
      body: '',
      links: [] as string[], // Array of at-uri strings
    },
  });

  // Pre-fill form from search params (from AI chat redirect)
  useEffect(() => {
    if (params.id === 'new') {
      const title = searchParams.get('title') || '';
      const body = searchParams.get('body') || '';
      form.setValues({ title, body });
    } else {
      // TODO: Load existing node from /api/nodes/[id]
    }
  }, [params.id, searchParams]);

  // Handler for the "Publish" button (calls Commit 06 API)
  const handlePublish = async (values: typeof form.values) => {
    setIsPublishing(true);
    try {
      const response = await fetch('/api/nodes', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(values),
      });

      if (!response.ok) {
        throw new Error('Failed to publish node');
      }

      const newNode = await response.json();
      // On success, go to the graph
      router.push('/galaxy');

    } catch (error) {
      console.error(error);
      // TODO: Show notification
    } finally {
      setIsPublishing(false);
    }
  };

  // Handler for the "Find related" button
  const handleFindRelated = async () => {
    setIsFinding(true);
    setSuggestions([]);
    try {
      const response = await fetch('/api/suggest-links', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ body: form.values.body }),
      });

      if (!response.ok) {
        throw new Error('Failed to find links');
      }

      const relatedNodes = await response.json();
      setSuggestions(relatedNodes);

    } catch (error) {
      console.error(error);
      // TODO: Show notification
    } finally {
      setIsFinding(false);
    }
  };

  return (
    <Container size="md" py="xl">
      <form onSubmit={form.onSubmit(handlePublish)}>
        <Stack gap="md">
          <Title order={2}>
            {params.id === 'new' ? 'Create New Node' : 'Edit Node'}
          </Title>

          <TextInput
            label="Title"
            placeholder="Your node title"
            required
            {...form.getInputProps('title')}
          />

          <Textarea
            label="Body"
            placeholder="Your node content..."
            required
            minRows={10}
            autosize
            {...form.getInputProps('body')}
          />

          <Group>
            <Button
              type="button"
              variant="outline"
              onClick={handleFindRelated}
              loading={isFinding}
            >
              Find Related
            </Button>
            <Button type="submit" loading={isPublishing}>
              Publish Node
            </Button>
          </Group>

          {/* Related Links Section */}
          <Stack>
            {isFinding && <LoadingOverlay visible />}
            {suggestions.length > 0 && <Title order={4}>Suggested Links</Title>}

            {suggestions.map((node) => (
              <Paper key={node.id} withBorder p="sm">
                <Text fw={700}>{node.title}</Text>
                <Text size="sm" lineClamp={2}>{node.body}</Text>
                <Text size="xs" c="dimmed">Similarity: {(node.score * 100).toFixed(0)}%</Text>
              </Paper>
            ))}

            {!isFinding && suggestions.length === 0 && (
              <Text size="sm" c="dimmed">
                {/* Placeholder text */}
              </Text>
            )}
          </Stack>

        </Stack>
      </form>
    </Container>
  );
}

**2. Create Link Suggestion API (app/api/suggest-links/route.ts)**

Create a file at /app/api/suggest-links/route.ts:
Create a file at /app/api/calculate-graph/route.ts:

TypeScript

import { NextRequest, NextResponse } from 'next/server';
import { cookies } from 'next/headers';
import { connectToDB } from '@/lib/db';
import { generateEmbedding } from '@/lib/ai';
import { UMAP } from 'umap-js';

export async function POST(request: NextRequest) {
  const surrealJwt = cookies().get('ponderants-auth')?.value;

@@ -210,137 +35,329 @@ export async function POST(request: NextRequest) {
    return NextResponse.json({ error: 'Not authenticated' }, { status: 401 });
  }

  const { body } = await request.json();
  if (!body) {
    return NextResponse.json({ error: 'Body text is required' }, { status: 400 });
  }

  try {
    // 1. Generate embedding for the current draft
    const draftEmbedding = await generateEmbedding(body);

    // 2. Connect to DB (as the user)
    const db = await connectToDB(surrealJwt);

    // 3. Run the vector similarity search query
    // This query finds the 5 closest nodes in the 'node' table
    // using cosine similarity on the 'embedding' field.
    // It only searches nodes WHERE user_did = $token.did,
    // which is enforced by the table's PERMISSIONS.
    const query = `
      SELECT
        id,
        title,
        body,
        atp_uri,
        vector::similarity::cosine(embedding, $draft_embedding) AS score
      FROM node
      ORDER BY score DESC
      LIMIT 5;
    `;
    // 1. Fetch all nodes that have an embedding but no coords
    const query = `SELECT id, embedding FROM node WHERE embedding != NONE AND coords_3d = NONE`;
    const results = await db.query(query);

    const results = await db.query(query, {
      draft_embedding: draftEmbedding,
    const nodes = results.result as { id: string; embedding: number[] }[];

    if (nodes.length < 3) {
      // UMAP needs a few points to work well
      return NextResponse.json({ message: 'Not enough nodes to map.' });
    }

    // 2. Prepare data for UMAP
    const embeddings = nodes.map(n => n.embedding);

    // 3. Run UMAP to reduce 1536-D to 3-D [26]
    const umap = new UMAP({
      nComponents: 3,
      nNeighbors: Math.min(15, nodes.length - 1), // nNeighbors must be < sample size
      minDist: 0.1,
      spread: 1.0,
    });

    // The query returns an array of result sets. We want the first one.
    return NextResponse.json(results.result);
    const coords_3d_array = umap.fit(embeddings);

    // 4. Update nodes in SurrealDB with their new 3D coords
    // This is run in a transaction for speed.
    let transaction = 'BEGIN TRANSACTION;';

    nodes.forEach((node, index) => {
      const coords = coords_3d_array[index];
      transaction += `UPDATE ${node.id} SET coords_3d = [${coords[0]}, ${coords[1]}, ${coords[2]}];`;
    });

    transaction += 'COMMIT TRANSACTION;';

    await db.query(transaction);

    return NextResponse.json({ success: true, nodes_mapped: nodes.length });

  } catch (error) {
    console.error('Link suggestion error:', error);
    console.error('Graph calculation error:', error);
    return NextResponse.json(
      { error: 'Failed to suggest links' },
      { error: 'Failed to calculate graph' },
      { status: 500 }
    );
  }
}

**2. Create Galaxy Page (app/galaxy/page.tsx)**

Create a file at /app/galaxy/page.tsx:

TypeScript

'use client';

import { Button, Box } from '@mantine/core';
import { Suspense, useState } from 'react';
import { ThoughtGalaxy } from '@/components/ThoughtGalaxy';

export default function GalaxyPage() {
  const [isCalculating, setIsCalculating] = useState(false);
  // This key forces a re-render of the galaxy component
  const [galaxyKey, setGalaxyKey] = useState(Date.now());

  const handleCalculateGraph = async () => {
    setIsCalculating(true);
    try {
      await fetch('/api/calculate-graph', { method: 'POST' });
      // Refresh the galaxy component by changing its key
      setGalaxyKey(Date.now());
    } catch (error) {
      console.error(error);
      // TODO: Show notification
    } finally {
      setIsCalculating(false);
    }
  };

  return (
    <Box style={{ height: '100vh', width: '100vw', position: 'relative' }}>
      <Button
        onClick={handleCalculateGraph}
        loading={isCalculating}
        style={{ position: 'absolute', top: 20, left: 20, zIndex: 10 }}
      >
        Calculate My Graph
      </Button>

      {/* R3F Canvas for the 3D visualization */}
      <Suspense fallback={<Box>Loading 3D Scene...</Box>}>
        <ThoughtGalaxy key={galaxyKey} />
      </Suspense>
    </Box>
  );
}

**3. Create 3D Component (components/ThoughtGalaxy.tsx)**

Create a file at /components/ThoughtGalaxy.tsx:

TypeScript

'use client';

import { Canvas, useLoader } from '@react-three/fiber';
import {
  CameraControls,
  Line,
  Text,
  useCursor,
} from '@react-three/drei';
import { Suspense, useEffect, useRef, useState } from 'react';
import * as THREE from 'three';
import { Surreal } from 'surrealdb.js';

// Define the shape of nodes and links from DB
interface NodeData {
  id: string;
  title: string;
  coords_3d: [number, number, number];
}
interface LinkData {
  in: string; // from node id
  out: string; // to node id
}

// 1. The 3D Node Component
function Node({ node, onNodeClick }) {
  const [hovered, setHovered] = useState(false);
  const [clicked, setClicked] = useState(false);
  useCursor(hovered);

  return (
    <mesh
      position={node.coords_3d}
      onClick={(e) => {
        e.stopPropagation();
        onNodeClick(node);
        setClicked(!clicked);
      }}
      onPointerOver={(e) => {
        e.stopPropagation();
        setHovered(true);
      }}
      onPointerOut={() => setHovered(false)}
    >
      <sphereGeometry args={[0.1, 32, 32]} />
      <meshStandardMaterial
        color={hovered ? '#90c0ff' : '#e9ecef'}
        emissive={hovered ? '#90c0ff' : '#e9ecef'}
        emissiveIntensity={hovered ? 0.5 : 0.1}
      />
      {/* Show title on hover or click */}
      {(hovered || clicked) && (
        <Text
          position={[0, 0.2, 0]}
          fontSize={0.1}
          color="white"
          anchorX="center"
          anchorY="middle"
        >
          {node.title}
        </Text>
      )}
    </mesh>
  );
}

// 2. The Main Scene Component
export function ThoughtGalaxy() {
  const [nodes, setNodes] = useState<NodeData[]>([]);
  const [links, setLinks] = useState<LinkData[]>([]);
  const cameraControlsRef = useRef<CameraControls>(null);

  // Fetch data from SurrealDB on mount
  useEffect(() => {
    async function fetchData() {
      // Client-side connection
      const db = new Surreal();
      await db.connect(process.env.NEXT_PUBLIC_SURREALDB_WSS_URL!);

      // Get the token from the cookie (this is a hack,
      // proper way is to use an API route)
      const token = document.cookie
        .split('; ')
        .find(row => row.startsWith('ponderants-auth='))
        ?.split('=')[1];

      if (!token) return;

      await db.authenticate(token);

      // Fetch nodes that have coordinates
      const nodeResults = await db.query(
        'SELECT id, title, coords_3d FROM node WHERE coords_3d != NONE'
      );
      setNodes((nodeResults.result as NodeData[]) || []);

      // Fetch links
      const linkResults = await db.query('SELECT in, out FROM links_to');
      setLinks((linkResults.result as LinkData[]) || []);
    }
    fetchData();
  }, []);

  // Map links to node positions
  const linkLines = links
    .map((link) => {
      const startNode = nodes.find((n) => n.id === link.in);
      const endNode = nodes.find((n) => n.id === link.out);
      if (startNode && endNode) {
        return {
          start: startNode.coords_3d,
          end: endNode.coords_3d,
        };
      }
      return null;
    })
    .filter(Boolean) as { start: [number, number, number]; end: [number, number, number] }[];

  // Camera animation
  const handleNodeClick = (node: NodeData) => {
    if (cameraControlsRef.current) {
      cameraControlsRef.current.smoothTime = 0.8;
      cameraControlsRef.current.setLookAt(
        node.coords_3d[0] + 1,
        node.coords_3d[1] + 1,
        node.coords_3d[2] + 1,
        ...node.coords_3d,
        true // Enable smooth transition
      );
    }
  };

  return (
    <Canvas camera={{ position: [0, 5, 10], fov: 60 }}>
      <ambientLight intensity={0.5} />
      <pointLight position={[10, 10, 10]} intensity={1} />
      <CameraControls ref={cameraControlsRef} />

      <Suspense fallback={null}>
        <group>
          {/* Render all nodes */}
          {nodes.map((node) => (
            <Node
              key={node.id}
              node={node}
              onNodeClick={handleNodeClick}
            />
          ))}

          {/* Render all links */}
          {linkLines.map((line, i) => (
            <Line
              key={i}
              points={[line.start, line.end]}
              color="#495057" // gray
              lineWidth={1}
            />
          ))}
        </group>
      </Suspense>
    </Canvas>
  );
}

### **Test Specification**

**1. Create Test File (tests/magnitude/10-linking.mag.ts)**
**1. Create Test File (tests/magnitude/11-viz.mag.ts)**

Create a file at /tests/magnitude/10-linking.mag.ts:
Create a file at /tests/magnitude/11-viz.mag.ts:

TypeScript

import { test } from 'magnitude-test';

// Helper function to seed the database for this test
// Helper function to seed the database
async function seedDatabase(agent) {
  // This would use a custom magnitude.run command or API
  // to pre-populate the SurrealDB instance with mock nodes.
  await agent.act('Seed the database with 3 nodes: "Node A", "Node B", "Node C"');
  // "Node A" is about "dogs and cats"
  // "Node B" is about "vector databases"
  // "Node C" is about "ATproto"
  await agent.act('Seed the database with 5 nodes (A, B, C, D, E) that have embeddings but NO coordinates');
}

test('[Happy Path] User can find related links for a draft', async (agent) => {
test('[Happy Path] User can calculate and view 3D graph', async (agent) => {
  // Setup: Seed the DB
  await seedDatabase(agent);

  // Act: Navigate to the editor
  await agent.act('Navigate to /editor/new');
  // Act: Go to galaxy page
  await agent.act('Navigate to /galaxy');

  // Act: Fill out the form with a related idea
  await agent.act(
    'Enter "My New Post" into the "Title" input'
  );
  await agent.act(
    'Enter "This idea is about vectors and databases, and how they work." into the "Body" textarea'
  );
  // Check: Canvas is empty (no nodes have coords yet)
  await agent.check('The 3D canvas is visible');
  await agent.check('The 3D canvas contains 0 node meshes');

  // Act: Click the find related button
  // (Mock the /api/suggest-links route to return "Node B")
  await agent.act('Click the "Find Related" button');
  // Act: Click the calculate button
  // (Mock the /api/calculate-graph route to return success
  // and trigger the component re-render)
  await agent.act('Click the "Calculate My Graph" button');

  // Check: The related node appears in the suggestions
  await agent.check('A list of suggested links appears');
  await agent.check('The suggested node "Node B" is visible in the list');
  await agent.check('The suggested node "Node A" is NOT visible in the list');
  // Check: Loading state appears
  await agent.check('The "Calculate My Graph" button shows a loading spinner');

  // (After mock API returns and component re-fetches)
  // Check: The canvas now has nodes
  await agent.check('The 3D canvas now contains 5 node meshes');
});

test('[Unhappy Path] User sees empty state when no links found', async (agent) => {
  // Setup: Seed the DB
test('[Interaction] User can click on a node to focus', async (agent) => {
  // Setup: Seed the DB and pre-calculate the graph
  await seedDatabase(agent);
  await agent.act('Navigate to /galaxy');
  await agent.act('Click the "Calculate My Graph" button');
  await agent.check('The 3D canvas now contains 5 node meshes');

  // Act: Navigate to the editor
  await agent.act('Navigate to /editor/new');
  // Act: Click on a node
  // (Magnitude can target R3F meshes by their properties)
  await agent.act('Click on the 3D node mesh corresponding to "Node A"');

  // Act: Fill out the form with an unrelated idea
  await agent.act(
    'Enter "Zebras" into the "Title" input'
  );
  await agent.act(
    'Enter "Zebras are striped equines." into the "Body" textarea'
  );

  // Act: Click the find related button
  // (Mock the /api/suggest-links route to return an empty array)
  await agent.act('Click the "Find Related" button');

  // Check: An empty state is shown
  await agent.check('The text "No related nodes found" is visible');
});

test('[Happy Path] User can publish a new node', async (agent) => {
  // Act: Navigate to the editor
  await agent.act('Navigate to /editor/new');

  // Act: Fill out the form
  await agent.act(
    'Enter "My First Published Node" into the "Title" input'
  );
  await agent.act(
    'Enter "This is the body of my first node." into the "Body" textarea'
  );

  // Act: Click Publish
  // (Mock the /api/nodes route (Commit 06) to return success)
  await agent.act('Click the "Publish Node" button');

  // Check: User is redirected to the galaxy
  await agent.check(
    'The browser URL is now "http://localhost:3000/galaxy"'
  );
  // Check: Camera moves
  // (This is hard to check directly, but we can check
  // for the side-effect: the text label appearing)
  await agent.check('The camera animates and moves closer to the node');
  await agent.check('A 3D text label "Node A" is visible');
});

144
docs/voice-mode-implementation-plan.md
Normal file
@@ -0,0 +1,144 @@
# Voice Mode Implementation Plan

## Phase 1: Clean State Machine

### Step 1: Rewrite state machine definition
- Remove all unnecessary complexity
- Clear state hierarchy
- Simple event handlers
- Proper tags on all states

### Step 2: Add test buttons to UI
- Button: "Skip to Listening" - sends START_LISTENING
- Button: "Simulate User Speech" - sends USER_STARTED_SPEAKING
- Button: "Simulate Silence" - sends SILENCE_TIMEOUT
- Button: "Simulate AI Response" - sends AI_RESPONSE_READY with test data
- Button: "Skip Audio" - sends SKIP_AUDIO (already exists)
- Display: Current state value and tags (a minimal sketch of this panel follows)
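A minimal sketch of such a dev-only panel, assuming the `state`/`send` pair returned by the voice hook and plain Mantine components; the `VoiceDevPanel` name and layout are illustrative, not part of the repo:

```typescript
// components/VoiceDevPanel.tsx: hypothetical dev-only test controls
import { Button, Group, Text } from '@mantine/core';

export function VoiceDevPanel({ state, send }: { state: any; send: (event: any) => void }) {
  // Never render the test controls in production builds
  if (process.env.NODE_ENV === 'production') return null;

  return (
    <Group gap="xs">
      <Button onClick={() => send({ type: 'START_LISTENING' })}>Skip to Listening</Button>
      <Button onClick={() => send({ type: 'USER_STARTED_SPEAKING' })}>Simulate User Speech</Button>
      <Button onClick={() => send({ type: 'SILENCE_TIMEOUT' })}>Simulate Silence</Button>
      <Button
        onClick={() => send({ type: 'AI_RESPONSE_READY', messageId: 'test-1', text: 'Test response' })}
      >
        Simulate AI Response
      </Button>
      <Button onClick={() => send({ type: 'SKIP_AUDIO' })}>Skip Audio</Button>
      {/* Display the current state value and its tags */}
      <Text size="xs">
        {JSON.stringify(state.value)} (tags: {[...state.tags].join(', ')})
      </Text>
    </Group>
  );
}
```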

## Phase 2: Fix Processing Logic

### Problem Analysis
Current issue: The processing effect is too complex and uses refs incorrectly.

### Solution
**Simple rule**: In processing state, check messages array:
1. If last message is NOT user with our transcript → submit
2. If last message IS user with our transcript AND second-to-last is assistant → play that assistant message
3. Otherwise → wait

**Implementation**:
```typescript
useEffect(() => {
  if (!state.hasTag('processing')) return;
  if (status !== 'ready') return;

  const transcript = state.context.transcript;
  if (!transcript) return;

  // Check last 2 messages
  const lastMsg = messages[messages.length - 1];
  const secondLastMsg = messages[messages.length - 2];

  // Case 1: Need to submit user message
  if (!lastMsg || lastMsg.role !== 'user' || getText(lastMsg) !== transcript) {
    submitUserInput();
    return;
  }

  // Case 2: User message submitted, check for AI response
  if (secondLastMsg && secondLastMsg.role === 'assistant') {
    const aiMsg = secondLastMsg;

    // Only play if we haven't played this exact message in this session
    if (state.context.lastSpokenMessageId !== aiMsg.id) {
      const text = getText(aiMsg);
      send({ type: 'AI_RESPONSE_READY', messageId: aiMsg.id, text });
      playAudio(text, aiMsg.id);
    }
  }
  // Otherwise, still waiting for AI response
}, [messages, state, status]);
```

No refs needed! Just check the messages array directly.

## Phase 3: Clean Audio Management

### Step 1: Simplify audio cancellation
- Keep shouldCancelAudioRef
- Call stopAllAudio() when leaving canSkipAudio states
- playAudio() checks cancel flag at each await (a minimal sketch follows this list)
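A minimal sketch of that cancellation pattern, assuming a `shouldCancelAudioRef` created with React's `useRef` and the existing `/api/tts` route; re-checking the flag after every `await` is the whole trick:

```typescript
const shouldCancelAudioRef = useRef(false);

async function playAudio(text: string, messageId: string) {
  // messageId is recorded by the caller as lastSpokenMessageId
  shouldCancelAudioRef.current = false;

  // Await #1: generate TTS, then re-check the cancel flag
  const response = await fetch('/api/tts', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text }),
  });
  if (shouldCancelAudioRef.current) return;

  // Await #2: download the audio bytes, then re-check again
  const blob = await response.blob();
  if (shouldCancelAudioRef.current) return;

  const audio = new Audio(URL.createObjectURL(blob));
  await audio.play();
}

// Called when leaving the canSkipAudio states: any pending await bails out
function stopAllAudio() {
  shouldCancelAudioRef.current = true;
}
```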

### Step 2: Effect cleanup
- Remove submittingTranscriptRef completely
- Remove the "reset ref when leaving processing" effect
- Rely only on messages array state

## Phase 4: Testing with Playwright

### Test Script
```typescript
test('Voice mode conversation flow', async (agent) => {
  await agent.open('http://localhost:3000/chat');

  // Login first
  await agent.act('Log in with Bluesky');

  // Start voice mode
  await agent.act('Click "Start Voice Conversation"');
  await agent.check('Button shows "Generating speech..." or "Listening..."');

  // Skip initial greeting if playing
  const skipVisible = await agent.check('Skip button is visible', { optional: true });
  if (skipVisible) {
    await agent.act('Click Skip button');
  }
  await agent.check('Button shows "Listening... Start speaking"');

  // Simulate user speech
  await agent.act('Click "Simulate User Speech" test button');
  await agent.check('Button shows "Speaking..."');

  await agent.act('Click "Simulate Silence" test button');
  await agent.check('Button shows "Processing..."');

  // Wait for AI response
  await agent.wait(5000);
  await agent.check('AI message appears in chat');
  await agent.check('Button shows "Generating speech..." or "AI is speaking..."');

  // Skip AI audio
  await agent.act('Click Skip button');
  await agent.check('Button shows "Listening... Start speaking"');

  // Second exchange
  await agent.act('Click "Simulate User Speech" test button');
  await agent.act('Click "Simulate Silence" test button');

  // Let AI audio play completely this time
  await agent.wait(10000);
  await agent.check('Button shows "Listening... Start speaking"');
});
```

## Phase 5: Validation

### Checklist
- [ ] State machine is serializable (can be visualized in Stately)
- [ ] No refs used in processing logic
- [ ] Latest message only plays once per session
- [ ] Skip works instantly in both aiGenerating and aiSpeaking
- [ ] Re-entering voice mode plays most recent AI message (if not already spoken)
- [ ] All test cases from PRD pass
- [ ] Playwright test passes

## Implementation Order

1. Add test buttons to UI (for manual testing)
2. Rewrite processing effect with simple messages array logic
3. Remove submittingTranscriptRef completely
4. Test manually with test buttons
5. Write Playwright test
6. Run and validate Playwright test
7. Clean up any remaining issues
149
docs/voice-mode-prd.md
Normal file
@@ -0,0 +1,149 @@
# Voice Mode PRD

## User Flows

### Flow 1: Starting Voice Conversation (No Previous Messages)
1. User clicks "Start Voice Conversation" button
2. System enters listening mode
3. Button shows "Listening... Start speaking"
4. Microphone indicator appears

### Flow 2: Starting Voice Conversation (With Previous AI Message)
1. User clicks "Start Voice Conversation" button
2. System checks for most recent AI message
3. If found and not already spoken in this session:
   - System generates and plays TTS for that message
   - Button shows "Generating speech..." then "AI is speaking..."
   - Skip button appears
4. After audio finishes OR user clicks skip:
   - System enters listening mode

### Flow 3: User Speaks
1. User speaks (while in listening state)
2. System detects speech, button shows "Speaking..."
3. System receives interim transcripts (updates display)
4. System receives finalized phrases (appends to transcript)
5. After each finalized phrase, 3-second silence timer starts
6. Button shows countdown: "Speaking... (auto-submits in 2.1s)"
7. If user continues speaking, timer resets

### Flow 4: Submit and AI Response
1. After 3 seconds of silence, transcript is submitted
2. Button shows "Processing..."
3. User message appears in chat
4. AI streams response (appears in chat)
5. When streaming completes:
   - System generates TTS for AI response
   - Button shows "Generating speech..."
   - When TTS ready, plays audio
   - Button shows "AI is speaking..."
   - Skip button appears
6. After audio finishes OR user clicks skip:
   - System returns to listening mode

### Flow 5: Skipping AI Audio
1. While AI is generating or speaking (button shows "Generating speech..." or "AI is speaking...")
2. Skip button is visible
3. User clicks Skip
4. Audio stops immediately
5. System enters listening mode
6. Button shows "Listening... Start speaking"

### Flow 6: Exiting Voice Mode
1. User clicks voice button (at any time)
2. System stops all audio
3. System closes microphone connection
4. Returns to text mode
5. Button shows "Start Voice Conversation"

## Critical Rules

1. **Latest Message Only**: AI ONLY plays the most recent assistant message. Never re-play old messages.
2. **Skip Always Works**: Skip button must IMMEDIATELY stop audio and return to listening.
3. **One Message Per Turn**: Each user speech -> one submission -> one AI response -> one audio playback.
4. **Clean State**: Every state transition should cancel any incompatible ongoing operations.

## State Machine

```
text
├─ TOGGLE_VOICE_MODE → voice.idle

voice.idle
├─ Check for latest AI message not yet spoken
│  ├─ If found → Send AI_RESPONSE_READY → voice.aiGenerating
│  └─ If not found → Send START_LISTENING → voice.listening
└─ TOGGLE_VOICE_MODE → text

voice.listening
├─ USER_STARTED_SPEAKING → voice.userSpeaking
├─ TRANSCRIPT_UPDATE → (update context.input for display)
└─ TOGGLE_VOICE_MODE → text

voice.userSpeaking
├─ FINALIZED_PHRASE → voice.timingOut (starts 3s timer)
├─ TRANSCRIPT_UPDATE → (update context.input for display)
└─ TOGGLE_VOICE_MODE → text

voice.timingOut
├─ FINALIZED_PHRASE → voice.timingOut (restart 3s timer)
├─ TRANSCRIPT_UPDATE → (update context.input for display)
├─ SILENCE_TIMEOUT → voice.processing
└─ TOGGLE_VOICE_MODE → text

voice.processing
├─ (Effect: submit if not submitted, wait for AI response)
├─ When AI response ready → Send AI_RESPONSE_READY → voice.aiGenerating
└─ TOGGLE_VOICE_MODE → text

voice.aiGenerating
├─ TTS_PLAYING → voice.aiSpeaking
├─ SKIP_AUDIO → voice.listening
└─ TOGGLE_VOICE_MODE → text

voice.aiSpeaking
├─ TTS_FINISHED → voice.listening
├─ SKIP_AUDIO → voice.listening
└─ TOGGLE_VOICE_MODE → text
```

## Test Cases

### Test 1: Basic Conversation
1. Click "Start Voice Conversation"
2. Skip initial greeting
3. Say "Hello"
4. Wait for AI response
5. Let AI audio play completely
6. Say "How are you?"
7. Skip AI audio
8. Say "Goodbye"

Expected: 3 exchanges, AI only plays latest message each time

### Test 2: Multiple Skips
1. Start voice mode
2. Skip greeting immediately
3. Say "Test one"
4. Skip AI response immediately
5. Say "Test two"
6. Skip AI response immediately

Expected: All skips work instantly, no audio bleeding

### Test 3: Re-entering Voice Mode
1. Start voice mode
2. Say "Hello"
3. Let AI respond
4. Exit voice mode (click button again)
5. Re-enter voice mode

Expected: AI reads the most recent message (its last response)

### Test 4: Long Speech
1. Start voice mode
2. Skip greeting
3. Say a long sentence with multiple pauses < 3 seconds
4. Wait for final 3s timeout

Expected: All speech is captured in one transcript
6
history.txt
Normal file
@@ -0,0 +1,6 @@
#V2
SELECT id, title, embedding IS NOT NONE as has_embedding, coords_3d IS NOT NONE as has_coords FROM node LIMIT 10;
SELECT count() as total FROM node GROUP ALL; SELECT count() as with_embeddings FROM node WHERE embedding IS NOT NONE GROUP ALL; SELECT count() as with_coords FROM node WHERE coords_3d IS NOT NONE GROUP ALL;
SELECT id, title, user_did FROM node LIMIT 5;
SELECT * FROM links_to LIMIT 10;
SELECT id, title, coords_3d FROM node LIMIT 5;
24
hooks/useAppMachine.ts
Normal file
@@ -0,0 +1,24 @@
/**
 * useAppMachine Hook
 *
 * Provides access to the app-level state machine from any component.
 * Must be used within an AppStateMachineProvider.
 */

import { createContext, useContext } from 'react';
import type { ActorRefFrom } from 'xstate';
import { appMachine } from '@/lib/app-machine';

type AppMachineActor = ActorRefFrom<typeof appMachine>;

export const AppMachineContext = createContext<AppMachineActor | null>(null);

export function useAppMachine() {
  const actor = useContext(AppMachineContext);

  if (!actor) {
    throw new Error('useAppMachine must be used within AppStateMachineProvider');
  }

  return actor;
}
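A minimal usage sketch, assuming `useSelector` from `@xstate/react` to subscribe to the actor; the `GalaxyButton` component is illustrative, not part of the repo:

```typescript
import { useSelector } from '@xstate/react';
import { useAppMachine } from '@/hooks/useAppMachine';

// Hypothetical consumer: disables itself while the app is already in the galaxy state
function GalaxyButton() {
  const actor = useAppMachine();
  const inGalaxy = useSelector(actor, (snapshot) => snapshot.matches('galaxy'));

  return (
    <button disabled={inGalaxy} onClick={() => actor.send({ type: 'NAVIGATE_TO_GALAXY' })}>
      Galaxy
    </button>
  );
}
```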
280
hooks/useVoiceMode.ts
Normal file
@@ -0,0 +1,280 @@
/**
 * Voice Mode Hook
 *
 * Clean React integration with the voice state machine.
 * Each effect responds to a state by performing an action and sending an event back.
 */

import { useEffect, useRef } from 'react';
import { useMachine } from '@xstate/react';
import { voiceMachine } from '@/lib/voice-machine';

interface UseVoiceModeProps {
  messages: any[];
  status: 'ready' | 'submitted' | 'streaming' | 'error';
  onSubmit: (text: string) => void;
}

export function useVoiceMode({ messages, status, onSubmit }: UseVoiceModeProps) {
  const [state, send] = useMachine(voiceMachine);

  // Refs for side effects
  const audioRef = useRef<HTMLAudioElement | null>(null);
  const mediaRecorderRef = useRef<MediaRecorder | null>(null);
  const socketRef = useRef<WebSocket | null>(null);

  // Helper: Get text from message
  const getMessageText = (msg: any): string => {
    if ('parts' in msg && Array.isArray(msg.parts)) {
      const textPart = msg.parts.find((p: any) => p.type === 'text');
      return textPart?.text || '';
    }
    return msg.content || '';
  };

  // STATE: checkingForGreeting
  // Action: Check if there's an unspoken AI message, send event
  useEffect(() => {
    if (!state.matches('checkingForGreeting')) return;

    const assistantMessages = messages.filter((m) => m.role === 'assistant');
    if (assistantMessages.length === 0) {
      send({ type: 'START_LISTENING' });
      return;
    }

    const latest = assistantMessages[assistantMessages.length - 1];
    if (state.context.lastSpokenMessageId === latest.id) {
      send({ type: 'START_LISTENING' });
      return;
    }

    const text = getMessageText(latest);
    if (text) {
      send({ type: 'AI_RESPONSE_RECEIVED', messageId: latest.id, text });
    } else {
      send({ type: 'START_LISTENING' });
    }
  }, [state, messages, send]);

  // STATE: listening
  // Action: Start microphone and WebSocket
  useEffect(() => {
    if (!state.matches('listening')) return;

    let cleanup: (() => void) | null = null;

    (async () => {
      try {
        // Get Deepgram token
        const response = await fetch('/api/voice-token', { method: 'POST' });
        const data = await response.json();
        if (data.error) throw new Error(data.error);

        // Get microphone
        const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

        // Connect WebSocket with VAD and utterance end detection
        const socket = new WebSocket(
          'wss://api.deepgram.com/v1/listen?interim_results=true&punctuate=true&vad_events=true&utterance_end_ms=1000',
          ['token', data.key]
        );
        socketRef.current = socket;

        socket.onopen = () => {
          const mediaRecorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });
          mediaRecorderRef.current = mediaRecorder;

          mediaRecorder.ondataavailable = (event) => {
            if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
              socket.send(event.data);
            }
          };

          mediaRecorder.start(250);
        };

        socket.onmessage = (event) => {
          const data = JSON.parse(event.data);

          // Handle UtteranceEnd - Deepgram detected end of utterance
          if (data.type === 'UtteranceEnd') {
            console.log('[Voice] Utterance ended, sending UTTERANCE_END event');
            send({ type: 'UTTERANCE_END' });
            return;
          }

          // Handle transcript events
          if (!data.channel?.alternatives) return;

          const transcript = data.channel.alternatives[0]?.transcript || '';
          if (!transcript) return;

          // Detect if user started or resumed speaking based on receiving transcript
          console.log('[Voice] Transcript received:', transcript);
          send({ type: 'USER_STARTED_SPEAKING' });

          // Append finalized phrases to the transcript
          if (data.is_final) {
            send({ type: 'FINALIZED_PHRASE', phrase: transcript });
          }
        };

        cleanup = () => {
          socket.close();
          mediaRecorderRef.current?.stop();
          stream.getTracks().forEach((track) => track.stop());
        };
      } catch (error) {
        console.error('[Voice] Error starting listening:', error);
        send({ type: 'ERROR', message: String(error) });
      }
    })();

    return cleanup || undefined;
  }, [state, send]);

  // STATE: timingOut is now handled by XState's built-in `after` delay
  // No useEffect needed - the state machine automatically transitions after 3 seconds

  // STATE: submittingUser
  // Action: Submit transcript, send event when done
  useEffect(() => {
    if (!state.matches('submittingUser')) return;

    const transcript = state.context.transcript.trim();
    if (!transcript) {
      send({ type: 'ERROR', message: 'No transcript to submit' });
      return;
    }

    // Close WebSocket
    if (socketRef.current) {
      socketRef.current.close();
      socketRef.current = null;
    }
    if (mediaRecorderRef.current) {
      mediaRecorderRef.current.stop();
      mediaRecorderRef.current = null;
    }

    // Submit
    onSubmit(transcript);
    send({ type: 'USER_MESSAGE_SUBMITTED' });
  }, [state, send, onSubmit]);

  // STATE: waitingForAI
  // Action: Poll messages for AI response
  useEffect(() => {
    if (!state.matches('waitingForAI')) return;
    if (status !== 'ready') return;

    const transcript = state.context.transcript.trim();
    if (!transcript) return;

    // Check if AI has responded
    const lastMsg = messages[messages.length - 1];
    const secondLastMsg = messages[messages.length - 2];

    if (
      lastMsg &&
      lastMsg.role === 'assistant' &&
      secondLastMsg &&
      secondLastMsg.role === 'user' &&
      getMessageText(secondLastMsg) === transcript
    ) {
      const text = getMessageText(lastMsg);
      if (text) {
        send({ type: 'AI_RESPONSE_RECEIVED', messageId: lastMsg.id, text });
      }
    }
  }, [state, messages, status, send]);

  // STATE: generatingTTS
  // Action: Generate TTS audio
  useEffect(() => {
    if (!state.matches('generatingTTS')) return;

    // Get the AI text from the event that triggered this state
    const assistantMessages = messages.filter((m) => m.role === 'assistant');
    const latest = assistantMessages[assistantMessages.length - 1];
    if (!latest) return;

    const text = getMessageText(latest);
    if (!text) return;

    (async () => {
      try {
        const response = await fetch('/api/tts', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ text }),
        });

        if (!response.ok) throw new Error('TTS generation failed');

        const audioBlob = await response.blob();
        const audioUrl = URL.createObjectURL(audioBlob);

        send({ type: 'TTS_GENERATION_COMPLETE', audioUrl });
      } catch (error) {
        console.error('[Voice] TTS generation error:', error);
        send({ type: 'ERROR', message: String(error) });
      }
    })();
  }, [state, messages, send]);

  // STATE: playingTTS
  // Action: Play audio, send event when finished
  useEffect(() => {
    if (!state.matches('playingTTS')) {
      // Stop audio if we leave this state
      if (audioRef.current) {
        audioRef.current.pause();
        audioRef.current.currentTime = 0;
      }
      return;
    }

    const audioUrl = state.context.audioUrl;
    if (!audioUrl) {
      send({ type: 'ERROR', message: 'No audio URL' });
      return;
    }

    // Create or reuse audio element
    if (!audioRef.current) {
      audioRef.current = new Audio();
    }

    audioRef.current.src = audioUrl;
    audioRef.current.onended = () => {
      URL.revokeObjectURL(audioUrl);
      send({ type: 'TTS_PLAYBACK_FINISHED' });
    };

    audioRef.current.onerror = () => {
      URL.revokeObjectURL(audioUrl);
      send({ type: 'ERROR', message: 'Audio playback error' });
    };

    audioRef.current.play().catch((error) => {
      console.error('[Voice] Audio play error:', error);
      send({ type: 'ERROR', message: String(error) });
    });

    return () => {
      if (audioRef.current) {
        audioRef.current.pause();
        audioRef.current.currentTime = 0;
      }
    };
  }, [state, send]);

  return {
    state,
    send,
    transcript: state.context.transcript,
    error: state.context.error,
  };
}
18
lib/ai.ts
@@ -1,17 +1,25 @@
import { GoogleGenerativeAI } from '@google/generative-ai';

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);
// Validate required environment variables
if (!process.env.GOOGLE_GENERATIVE_AI_API_KEY) {
  throw new Error('GOOGLE_GENERATIVE_AI_API_KEY environment variable is required');
}

if (!process.env.GOOGLE_EMBEDDING_MODEL) {
  throw new Error('GOOGLE_EMBEDDING_MODEL environment variable is required (e.g., gemini-embedding-001)');
}

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_GENERATIVE_AI_API_KEY);

const embeddingModel = genAI.getGenerativeModel({
  model: 'text-embedding-004',
  model: process.env.GOOGLE_EMBEDDING_MODEL,
});

/**
 * Generates a vector embedding for a given text using Google's text-embedding-004 model.
 * The output is a 768-dimension vector (not 1536 as originally specified).
 * Generates a vector embedding for a given text using the configured Google embedding model.
 *
 * @param text - The text to embed
 * @returns A 768-dimension vector (Array<number>)
 * @returns A vector embedding (dimension depends on model)
 */
export async function generateEmbedding(text: string): Promise<number[]> {
  try {
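For reference, a minimal call sketch of the updated helper; the returned vector length now depends on the configured model rather than being fixed:

```typescript
import { generateEmbedding } from '@/lib/ai';

const vector = await generateEmbedding('Vector databases index embeddings for similarity search.');
// 768 for text-embedding-004; other models (e.g. gemini-embedding-001) may differ
console.log(vector.length);
```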
170
lib/app-machine.ts
Normal file
@@ -0,0 +1,170 @@
/**
 * App-Level State Machine
 *
 * Manages the top-level application state across three main modes:
 * - convo: Active conversation (voice or text)
 * - edit: Editing a node
 * - galaxy: 3D visualization of node graph
 *
 * This machine sits above the conversation machine (which contains voice/text modes).
 * It does NOT duplicate the voice mode logic - that lives in voice-machine.ts.
 */

import { setup, assign } from 'xstate';

export interface NodeDraft {
  title: string;
  content: string;
  conversationContext?: string; // Last N messages as context
}

export interface Node {
  id: string;
  title: string;
  content: string;
  atp_uri?: string;
}

interface AppContext {
  currentNodeId: string | null;
  pendingNodeDraft: NodeDraft | null;
  mode: 'mobile' | 'desktop';
  lastError: string | null;
}

type AppEvent =
  | { type: 'NAVIGATE_TO_CONVO' }
  | { type: 'NAVIGATE_TO_EDIT'; nodeId?: string; draft?: NodeDraft }
  | { type: 'NAVIGATE_TO_GALAXY' }
  | { type: 'CREATE_NODE_FROM_CONVERSATION'; draft: NodeDraft }
  | { type: 'PUBLISH_NODE_SUCCESS'; nodeId: string }
  | { type: 'CANCEL_EDIT' }
  | { type: 'SET_MODE'; mode: 'mobile' | 'desktop' }
  | { type: 'ERROR'; message: string };

export const appMachine = setup({
  types: {
    context: {} as AppContext,
    events: {} as AppEvent,
  },
  actions: {
    setCurrentNode: assign({
      currentNodeId: ({ event }) =>
        event.type === 'NAVIGATE_TO_EDIT' ? event.nodeId || null : null,
    }),
    setPendingDraft: assign({
      pendingNodeDraft: ({ event }) => {
        if (event.type === 'NAVIGATE_TO_EDIT' && event.draft) {
          console.log('[App Machine] Setting pending draft:', event.draft);
          return event.draft;
        }
        if (event.type === 'CREATE_NODE_FROM_CONVERSATION') {
          console.log('[App Machine] Creating node from conversation:', event.draft);
          return event.draft;
        }
        return null;
      },
    }),
    clearDraft: assign({
      pendingNodeDraft: null,
      currentNodeId: null,
    }),
    setPublishedNode: assign({
      currentNodeId: ({ event }) =>
        event.type === 'PUBLISH_NODE_SUCCESS' ? event.nodeId : null,
      pendingNodeDraft: null,
    }),
    setMode: assign({
      mode: ({ event }) => (event.type === 'SET_MODE' ? event.mode : 'desktop'),
    }),
    setError: assign({
      lastError: ({ event }) => (event.type === 'ERROR' ? event.message : null),
    }),
    clearError: assign({
      lastError: null,
    }),
    logTransition: ({ context, event }) => {
      console.log('[App Machine] Event:', event.type);
      console.log('[App Machine] Context:', {
        currentNodeId: context.currentNodeId,
        hasDraft: !!context.pendingNodeDraft,
        mode: context.mode,
      });
    },
  },
}).createMachine({
  id: 'app',
  initial: 'convo',
  context: {
    currentNodeId: null,
    pendingNodeDraft: null,
    mode: 'desktop',
    lastError: null,
  },
  on: {
    SET_MODE: {
      actions: ['setMode', 'logTransition'],
    },
    ERROR: {
      actions: ['setError', 'logTransition'],
    },
  },
  states: {
    convo: {
      tags: ['conversation'],
      entry: ['clearError', 'logTransition'],
      on: {
        NAVIGATE_TO_EDIT: {
          target: 'edit',
          actions: ['setCurrentNode', 'setPendingDraft', 'logTransition'],
        },
        CREATE_NODE_FROM_CONVERSATION: {
          target: 'edit',
          actions: ['setPendingDraft', 'logTransition'],
        },
        NAVIGATE_TO_GALAXY: {
          target: 'galaxy',
          actions: ['logTransition'],
        },
      },
    },

    edit: {
      tags: ['editing'],
      entry: ['clearError', 'logTransition'],
      on: {
        NAVIGATE_TO_CONVO: {
          target: 'convo',
          actions: ['logTransition'],
        },
        NAVIGATE_TO_GALAXY: {
          target: 'galaxy',
          actions: ['logTransition'],
        },
        PUBLISH_NODE_SUCCESS: {
          target: 'galaxy',
          actions: ['setPublishedNode', 'logTransition'],
        },
        CANCEL_EDIT: {
          target: 'convo',
          actions: ['clearDraft', 'logTransition'],
        },
      },
    },

    galaxy: {
      tags: ['visualization'],
      entry: ['clearError', 'logTransition'],
      on: {
        NAVIGATE_TO_CONVO: {
          target: 'convo',
          actions: ['logTransition'],
        },
        NAVIGATE_TO_EDIT: {
          target: 'edit',
          actions: ['setCurrentNode', 'setPendingDraft', 'logTransition'],
        },
      },
    },
  },
});
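A minimal provider sketch to pair with this machine, assuming `useActorRef` from `@xstate/react` and the `AppMachineContext` exported by `hooks/useAppMachine.ts`; this wiring is an assumption, not the repo's actual provider:

```typescript
// components/AppStateMachine.tsx (sketch)
'use client';

import type { ReactNode } from 'react';
import { useActorRef } from '@xstate/react';
import { appMachine } from '@/lib/app-machine';
import { AppMachineContext } from '@/hooks/useAppMachine';

export function AppStateMachineProvider({ children }: { children: ReactNode }) {
  // One long-lived actor for the whole app; components read it via useAppMachine()
  const actorRef = useActorRef(appMachine);

  return <AppMachineContext.Provider value={actorRef}>{children}</AppMachineContext.Provider>;
}
```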
@@ -38,9 +38,10 @@ export async function getOAuthClient(): Promise<NodeOAuthClient> {
  if (isDev) {
    // Development: Use localhost loopback client
    // Per ATproto spec, we encode metadata in the client_id query params
    // Request 'transition:generic' scope for repository write access
    const clientId = `http://localhost/?${new URLSearchParams({
      redirect_uri: callbackUrl,
      scope: 'atproto',
      scope: 'atproto transition:generic',
    }).toString()}`;

    console.log('[OAuth] Initializing development client with loopback exception');
@@ -50,7 +51,7 @@ export async function getOAuthClient(): Promise<NodeOAuthClient> {
      clientMetadata: {
        client_id: clientId,
        redirect_uris: [callbackUrl],
        scope: 'atproto',
        scope: 'atproto transition:generic',
        grant_types: ['authorization_code', 'refresh_token'],
        response_types: ['code'],
        application_type: 'native',
@@ -38,6 +38,7 @@ async function getDB(): Promise<Surreal> {
export function createSessionStore(): NodeSavedSessionStore {
  return {
    async set(did: string, sessionData: NodeSavedSession): Promise<void> {
      console.log('[SessionStore] Setting session for DID:', did);
      const db = await getDB();

      try {
@@ -50,12 +51,14 @@ export function createSessionStore(): NodeSavedSessionStore {

        if (Array.isArray(existing) && existing.length > 0) {
          // Update existing record
          console.log('[SessionStore] Updating existing session');
          await db.merge(recordId, {
            session_data: sessionData,
            updated_at: new Date().toISOString(),
          });
        } else {
          // Create new record
          console.log('[SessionStore] Creating new session');
          await db.create(recordId, {
            did,
            session_data: sessionData,
@@ -63,12 +66,17 @@ export function createSessionStore(): NodeSavedSessionStore {
            updated_at: new Date().toISOString(),
          });
        }
        console.log('[SessionStore] ✓ Session saved successfully');
      } catch (error) {
        console.error('[SessionStore] Error setting session:', error);
        throw error;
      } finally {
        await db.close();
      }
    },

    async get(did: string): Promise<NodeSavedSession | undefined> {
      console.log('[SessionStore] Getting session for DID:', did);
      const db = await getDB();

      try {
@@ -77,7 +85,11 @@ export function createSessionStore(): NodeSavedSessionStore {

        // db.select() returns an array when selecting a specific record ID
        const record = Array.isArray(result) ? result[0] : result;
        console.log('[SessionStore] Get result:', { found: !!record, hasSessionData: !!record?.session_data });
        return record?.session_data;
      } catch (error) {
        console.error('[SessionStore] Error getting session:', error);
        throw error;
      } finally {
        await db.close();
      }
27
lib/db.ts
@@ -1,31 +1,42 @@
import Surreal from 'surrealdb';

/**
 * Connects to the SurrealDB instance and authenticates with the user's JWT.
 * This enforces row-level security defined in the schema.
 * Connects to the SurrealDB instance with root credentials.
 *
 * IMPORTANT: This connects as root, so queries MUST filter by user_did
 * to enforce data isolation. The caller is responsible for providing
 * the correct user_did from the verified JWT.
 *
 * @param token - The user's app-specific (SurrealDB) JWT
 * @returns The authenticated SurrealDB instance
 */
export async function connectToDB(token: string): Promise<Surreal> {
export async function connectToDB(): Promise<Surreal> {
  const SURREALDB_URL = process.env.SURREALDB_URL;
  const SURREALDB_NAMESPACE = process.env.SURREALDB_NS;
  const SURREALDB_DATABASE = process.env.SURREALDB_DB;
  const SURREALDB_USER = process.env.SURREALDB_USER;
  const SURREALDB_PASS = process.env.SURREALDB_PASS;

  if (!SURREALDB_URL || !SURREALDB_NAMESPACE || !SURREALDB_DATABASE) {
    throw new Error('SurrealDB configuration is missing');
  }

  if (!SURREALDB_USER || !SURREALDB_PASS) {
    throw new Error('SurrealDB credentials are missing');
  }

  // Create a new instance for each request to avoid connection state issues
  const db = new Surreal();

  // Connect to SurrealDB
  await db.connect(SURREALDB_URL);

  // Authenticate as the user for this request.
  // This enforces the row-level security (PERMISSIONS)
  // defined in the schema for all subsequent queries.
  await db.authenticate(token);
  // Sign in with root credentials
  // NOTE: We use root access because our JWT-based auth is app-level,
  // not SurrealDB-level. Queries must filter by user_did from the verified JWT.
  await db.signin({
    username: SURREALDB_USER,
    password: SURREALDB_PASS,
  });

  // Use the correct namespace and database
  await db.use({
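Since every caller now gets root access, here is a minimal sketch of the required scoping pattern, assuming a `node` table with a `user_did` field as in the schema; the exact query shape is illustrative:

```typescript
// userDid must come from the app's verified JWT, never from client input
const db = await connectToDB();
const result = await db.query(
  'SELECT * FROM node WHERE user_did = $userDid',
  { userDid }
);
```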
204
lib/voice-machine.ts
Normal file
@@ -0,0 +1,204 @@
/**
 * Voice Mode State Machine - Clean, Canonical Design
 *
 * This machine represents the voice conversation flow.
 * All logic is in the machine definition, not in React effects.
 */

import { setup, assign, fromPromise } from 'xstate';

interface VoiceContext {
  transcript: string;
  lastSpokenMessageId: string | null;
  error: string | null;
  audioUrl: string | null;
  aiText: string | null;
}

type VoiceEvent =
  | { type: 'START_VOICE' }
  | { type: 'STOP_VOICE' }
  | { type: 'START_LISTENING' }
  | { type: 'USER_STARTED_SPEAKING' }
  | { type: 'FINALIZED_PHRASE'; phrase: string }
  | { type: 'UTTERANCE_END' }
  | { type: 'SILENCE_TIMEOUT' }
  | { type: 'USER_MESSAGE_SUBMITTED' }
  | { type: 'AI_RESPONSE_RECEIVED'; messageId: string; text: string }
  | { type: 'TTS_GENERATION_COMPLETE'; audioUrl: string }
  | { type: 'TTS_PLAYBACK_STARTED' }
  | { type: 'TTS_PLAYBACK_FINISHED' }
  | { type: 'SKIP_AUDIO' }
  | { type: 'ERROR'; message: string };

export const voiceMachine = setup({
  types: {
    context: {} as VoiceContext,
    events: {} as VoiceEvent,
  },
  actions: {
    setTranscript: assign({
      transcript: ({ event }) =>
        event.type === 'FINALIZED_PHRASE' ? event.phrase : '',
    }),
    appendPhrase: assign({
      transcript: ({ context, event }) =>
        event.type === 'FINALIZED_PHRASE'
          ? context.transcript + (context.transcript ? ' ' : '') + event.phrase
          : context.transcript,
    }),
    clearTranscript: assign({
      transcript: '',
    }),
    setLastSpoken: assign({
      lastSpokenMessageId: ({ event }) =>
        event.type === 'AI_RESPONSE_RECEIVED' ? event.messageId : null,
      aiText: ({ event }) =>
        event.type === 'AI_RESPONSE_RECEIVED' ? event.text : null,
    }),
    setAudioUrl: assign({
      audioUrl: ({ event }) =>
        event.type === 'TTS_GENERATION_COMPLETE' ? event.audioUrl : null,
    }),
    clearAudio: assign({
      audioUrl: null,
      aiText: null,
    }),
    setError: assign({
      error: ({ event }) => (event.type === 'ERROR' ? event.message : null),
    }),
    clearError: assign({
      error: null,
    }),
  },
}).createMachine({
  id: 'voice',
  initial: 'idle',
  context: {
    transcript: '',
    lastSpokenMessageId: null,
    error: null,
    audioUrl: null,
    aiText: null,
  },
  states: {
    idle: {
      tags: ['voiceIdle'],
      on: {
        START_VOICE: 'checkingForGreeting',
        STOP_VOICE: 'idle',
      },
    },

    checkingForGreeting: {
      tags: ['checking'],
      // This state checks if there's an unspoken AI message
      // In React, an effect will check messages and send appropriate event
      on: {
        AI_RESPONSE_RECEIVED: {
          target: 'generatingTTS',
          actions: 'setLastSpoken',
        },
        START_LISTENING: 'listening',
      },
    },

    listening: {
      tags: ['listening'],
      entry: ['clearTranscript', 'clearAudio'],
      on: {
        USER_STARTED_SPEAKING: 'userSpeaking',
        STOP_VOICE: 'idle',
      },
    },

    userSpeaking: {
      tags: ['userSpeaking'],
      on: {
        FINALIZED_PHRASE: {
          target: 'userSpeaking',
          actions: 'appendPhrase',
          reenter: true,
        },
        UTTERANCE_END: 'timingOut',
        STOP_VOICE: 'idle',
      },
    },

    timingOut: {
      tags: ['timingOut'],
      entry: () => console.log('[Voice Machine] Entered timingOut state, 3-second timer starting'),
      after: {
        3000: {
          target: 'submittingUser',
          actions: () => console.log('[Voice Machine] 3 seconds elapsed, transitioning to submittingUser'),
        },
      },
      on: {
        USER_STARTED_SPEAKING: 'userSpeaking', // User started talking again, cancel timeout
        // Don't handle FINALIZED_PHRASE here - just let the timer run
        STOP_VOICE: 'idle',
      },
    },

    submittingUser: {
      tags: ['submitting'],
      // React effect submits the transcript
      on: {
        USER_MESSAGE_SUBMITTED: 'waitingForAI',
        ERROR: {
          target: 'idle',
          actions: 'setError',
        },
        STOP_VOICE: 'idle',
      },
    },

    waitingForAI: {
      tags: ['waitingForAI'],
      // React effect polls/waits for AI response
      on: {
        AI_RESPONSE_RECEIVED: {
          target: 'generatingTTS',
          actions: 'setLastSpoken',
        },
        ERROR: {
          target: 'idle',
          actions: 'setError',
        },
        STOP_VOICE: 'idle',
      },
    },

    generatingTTS: {
      tags: ['aiGenerating', 'canSkipAudio'],
      // React effect generates TTS
      on: {
        TTS_GENERATION_COMPLETE: {
          target: 'playingTTS',
          actions: 'setAudioUrl',
        },
        SKIP_AUDIO: 'listening',
        ERROR: {
          target: 'listening',
          actions: 'setError',
        },
        STOP_VOICE: 'idle',
      },
    },

    playingTTS: {
      tags: ['aiSpeaking', 'canSkipAudio'],
      // React effect plays audio
      on: {
        TTS_PLAYBACK_FINISHED: 'listening',
        SKIP_AUDIO: 'listening',
        ERROR: {
          target: 'listening',
          actions: 'setError',
        },
        STOP_VOICE: 'idle',
      },
    },
  },
});
@@ -23,6 +23,7 @@
    "@react-three/drei": "latest",
    "@react-three/fiber": "latest",
    "@tabler/icons-react": "^3.35.0",
    "@xstate/react": "^6.0.0",
    "ai": "latest",
    "jsonwebtoken": "latest",
    "next": "latest",
@@ -32,9 +33,11 @@
    "surrealdb": "latest",
    "three": "latest",
    "umap-js": "latest",
    "xstate": "^5.24.0",
    "zod": "latest"
  },
  "devDependencies": {
    "@playwright/test": "^1.56.1",
    "@types/jsonwebtoken": "latest",
    "@types/node": "latest",
    "@types/react": "latest",
244
plans/REVISED-remaining-work.md
Normal file
@@ -0,0 +1,244 @@
# REVISED: Remaining Work for Ponderants

## What We Actually Have ✅

After reviewing the codebase, here's what's **already implemented**:

### Backend/API ✅
- `/api/nodes` - Node creation with ATproto publishing + SurrealDB caching
- `/api/suggest-links` - Vector similarity search for related nodes
- `/api/calculate-graph` - UMAP dimensionality reduction for 3D coordinates
- `/api/chat` - AI conversation endpoint
- `/api/tts` - Text-to-speech generation
- `/api/voice-token` - Deepgram token generation
- Embedding generation (`lib/ai.ts`)
- ATproto publishing (write-through cache pattern)

### Frontend Components ✅
- `components/ThoughtGalaxy.tsx` - Full R3F 3D visualization
- `app/galaxy/page.tsx` - Galaxy view page with graph calculation
- `app/chat/page.tsx` - Chat interface with voice mode
- Voice mode state machine (`lib/voice-machine.ts`) - FULLY TESTED
- User authentication with OAuth
- User profile menu

### Infrastructure ✅
- SurrealDB schema with permissions
- ATproto OAuth flow
- Vector embeddings (gemini-embedding-001)
- Graph relationships in SurrealDB

---

## What's Actually Missing ❌

### **~9-13 hours of focused work** across 4 main areas:

## 1. App-Level Navigation & State (3-4 hours)

### Missing:
- App-level state machine to manage Convo ↔ Edit ↔ Galaxy transitions
- Persistent navigation UI (bottom bar mobile, sidebar desktop)
- Route navigation that respects app state

### Tasks:
1. Create `lib/app-machine.ts` (XState)
   - States: `convo`, `edit`, `galaxy`
   - Context: `currentNodeId`, `mode`
   - Events: `NAVIGATE_TO_EDIT`, `NAVIGATE_TO_GALAXY`, `NAVIGATE_TO_CONVO`

2. Create `components/AppStateMachine.tsx` provider

3. Create `components/Navigation/MobileBottomBar.tsx`
   - 3 buttons: Convo, Edit, Galaxy
   - Fixed at bottom
   - Highlights active state (a minimal sketch follows this list)

4. Create `components/Navigation/DesktopSidebar.tsx`
   - Vertical nav links
   - Same 3 modes

5. Update `app/layout.tsx` with Mantine `AppShell`
   - Responsive navigation
   - Wraps with AppStateMachine provider
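A minimal sketch of the bottom bar, assuming the `useAppMachine` hook from Task 2 plus Mantine and Tabler icons already in the dependencies; icon choices and styling are illustrative:

```typescript
// components/Navigation/MobileBottomBar.tsx (sketch)
import { ActionIcon, Group } from '@mantine/core';
import { IconMessage, IconPencil, IconPlanet } from '@tabler/icons-react';
import { useSelector } from '@xstate/react';
import { useAppMachine } from '@/hooks/useAppMachine';

const items = [
  { state: 'convo', icon: IconMessage, event: 'NAVIGATE_TO_CONVO' },
  { state: 'edit', icon: IconPencil, event: 'NAVIGATE_TO_EDIT' },
  { state: 'galaxy', icon: IconPlanet, event: 'NAVIGATE_TO_GALAXY' },
] as const;

export function MobileBottomBar() {
  const actor = useAppMachine();
  const current = useSelector(actor, (snapshot) => snapshot.value);

  return (
    <Group justify="space-around" pos="fixed" bottom={0} left={0} right={0} p="xs">
      {items.map(({ state, icon: Icon, event }) => (
        <ActionIcon
          key={state}
          variant={current === state ? 'filled' : 'subtle'} // highlight the active mode
          onClick={() => actor.send({ type: event })}
        >
          <Icon />
        </ActionIcon>
      ))}
    </Group>
  );
}
```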

**Acceptance Criteria:**
- Can navigate between /chat, /edit, /galaxy
- Active mode highlighted in nav
- State persists across page refreshes
- Works on mobile and desktop

---

## 2. Node Editor UI (3-4 hours)

### Missing:
- Visual editor for node title/content
- Link suggestion UI
- Publish/Cancel/Continue buttons
- Integration with existing `/api/nodes` and `/api/suggest-links`

### Tasks:
1. Create `/app/edit/page.tsx`
   - Fetches current draft from app state
   - Shows editor form

2. Create `components/Edit/NodeEditor.tsx`
   - Mantine TextInput for title
   - Mantine Textarea (or RTE) for content
   - Markdown preview
   - Buttons: "Publish to ATproto", "Cancel", "Continue Conversation"

3. Create `components/Edit/LinkSuggestions.tsx`
   - Calls `/api/suggest-links` on content change (debounced; see the sketch after this list)
   - Shows top 5 related nodes
   - Checkboxes to approve links
   - Shows similarity scores

4. Create `hooks/useNodeEditor.ts`
   - Mantine useForm integration
   - Handles publish flow
   - Manages link selection
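A minimal sketch of the debounced fetch, assuming `/api/suggest-links` accepts `{ content }` and returns `{ suggestions }`; the exact request/response shape is an assumption:

```typescript
// inside components/Edit/LinkSuggestions.tsx (sketch)
import { useEffect, useState } from 'react';

export function useLinkSuggestions(content: string) {
  const [suggestions, setSuggestions] = useState<any[]>([]);

  useEffect(() => {
    if (!content.trim()) return;

    // Debounce: wait 500ms after the last edit before calling the API
    const timer = setTimeout(async () => {
      const res = await fetch('/api/suggest-links', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ content }),
      });
      if (res.ok) {
        const data = await res.json();
        setSuggestions(data.suggestions ?? []);
      }
    }, 500);

    return () => clearTimeout(timer); // cancel if content changes again
  }, [content]);

  return suggestions;
}
```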

**Acceptance Criteria:**
- Can edit node title and content
- Markdown preview works
- Link suggestions appear based on content
- "Publish" calls `/api/nodes` with approved links
- "Cancel" discards draft, returns to /chat
- "Continue" saves draft to state, returns to /chat

---

## 3. AI Node Suggestion Integration (2-3 hours)

### Missing:
- AI detection that user wants to create a node
- UI to show node suggestion in conversation
- Flow from suggestion → draft → edit mode

### Tasks:
1. Update AI system prompt in `/api/chat/route.ts`
   - Teach AI to suggest nodes when appropriate
   - Use structured output or tool calling (a tool-calling sketch follows this list)

2. Create `components/Conversation/NodeSuggestionCard.tsx`
   - Shows suggested title/content from AI
   - "Save to Edit" button
   - "Dismiss" button
   - Appears inline in chat

3. Update `app/chat/page.tsx`
   - Detect node suggestions in AI responses
   - Show NodeSuggestionCard when detected
   - "Save to Edit" sets app state and navigates to /edit

4. (Optional) Add explicit "Save" button to chat UI
   - Always visible in bottom bar
   - Creates node from last N messages
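For the tool-calling variant, a sketch using the Vercel AI SDK's `tool` helper with `zod` (both already in the dependencies); the tool name and fields are assumptions, and newer SDK versions rename `parameters` to `inputSchema`:

```typescript
// in /api/chat/route.ts (sketch)
import { streamText, tool } from 'ai';
import { z } from 'zod';

const suggestNode = tool({
  description: 'Suggest saving the current idea as a knowledge node',
  parameters: z.object({
    title: z.string().describe('Short title for the node'),
    content: z.string().describe('Markdown body distilled from the conversation'),
  }),
  // No execute handler: the client renders a NodeSuggestionCard from the tool call
});

const result = streamText({
  model, // the Gemini model configured for /api/chat
  messages,
  tools: { suggestNode },
});
```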

**Acceptance Criteria:**
- AI can suggest creating a node
- Suggestion appears as a card in chat
- "Save to Edit" transitions to edit mode with draft
- Draft includes conversation context as content

---

## 4. Polish & Integration (1-2 hours)

### Missing:
- Galaxy → Edit mode flow (click node to edit)
- Edit → Galaxy flow (after publish, view in galaxy)
- Error handling & loading states
- Mobile responsiveness tweaks

### Tasks:
1. Update `components/ThoughtGalaxy.tsx`
   - On node click: navigate to /edit with that node ID
   - Fetch node data in edit page

2. Add "View in Galaxy" button to edit page
   - After successful publish
   - Transitions to galaxy with smooth camera animation

3. Add loading states everywhere
   - Skeleton loaders for galaxy
   - Spinner during publish
   - Disabled states during operations

4. Mobile testing & fixes
   - Touch controls in galaxy
   - Bottom bar doesn't overlap content
   - Voice mode works on mobile

**Acceptance Criteria:**
- Can click node in galaxy to edit it
- After publishing, can view node in galaxy
- All operations have loading feedback
- Works smoothly on mobile

---

## Revised Total Estimate

| Area | Est. Hours |
|------|------------|
| App Navigation & State | 3-4 |
| Node Editor UI | 3-4 |
| AI Node Suggestions | 2-3 |
| Polish & Integration | 1-2 |
| **TOTAL** | **9-13 hours** |

---

## What We DON'T Need to Build

❌ Galaxy visualization (EXISTS)
❌ Node creation API (EXISTS)
❌ Vector search (EXISTS)
❌ UMAP graph calculation (EXISTS)
❌ Voice mode state machine (EXISTS & TESTED)
❌ ATproto publishing (EXISTS)
❌ Embedding generation (EXISTS)
❌ SurrealDB schema (EXISTS)

---

## Implementation Priority

### Phase 1: Core Flow (P0) - 5-7 hours
1. App state machine
2. Navigation UI
3. Node editor
4. Basic node suggestion (even if manual)

### Phase 2: AI Integration (P1) - 2-3 hours
5. AI-powered node suggestions
6. Conversation context in drafts

### Phase 3: Polish (P2) - 1-2 hours
7. Galaxy ↔ Edit integration
8. Loading states
9. Mobile fixes

---

## Next Steps

1. **Start with Phase 1, Task 1**: Create `lib/app-machine.ts`
2. **Then Task 2**: Build navigation UI
3. **Then Task 3**: Build node editor

The rest can be done incrementally and tested along the way.

---

## Testing Strategy

- **Manual testing with Playwright MCP** for each phase
- **Update magnitude tests** as new features are added
- **Focus on user flows**:
  - Convo → Suggestion → Edit → Publish → Galaxy
  - Galaxy → Click node → Edit → Publish
  - Voice mode → Node creation (should work seamlessly)
530
plans/app-state-machine-architecture.md
Normal file
@@ -0,0 +1,530 @@
# Ponderants App State Machine Architecture

## Executive Summary

This document outlines the complete hierarchical state machine architecture for Ponderants, integrating the recently-completed Voice Mode with the full app experience: Conversation → Edit → Galaxy visualization.

**Current Status**: Voice Mode state machine is complete and tested (✅)
**Remaining Work**: 3 major phases across ~15-20 implementation tasks

---

## 1. Current State (What We Have)

### ✅ Completed
- **Voice Mode State Machine** (`lib/voice-machine.ts`)
  - States: idle, checkingForGreeting, listening, userSpeaking, timingOut, submittingUser, waitingForAI, generatingTTS, playingTTS
  - Fully tested with development controls
  - Clean XState v5 implementation

- **Chat Interface** (`app/chat/page.tsx`)
  - Text input with AI responses
  - Voice mode integration
  - Initial greeting message
  - User menu with logout

- **Authentication** (OAuth with Bluesky/ATproto)
- **AI Integration** (Vercel AI SDK with Gemini)
- **TTS** (Deepgram API)

### ❌ Missing
- Node creation/extraction from conversation
- Node editing interface
- 3D galaxy visualization
- App-level state management
- Persistent navigation UI
- ATproto publishing
- Vector embeddings & linking

---

## 2. Hierarchical State Machine Architecture

### Level 1: App Machine (Top Level)

```
┌─────────────────────────────────────────────────────────┐
│                      APP MACHINE                        │
│                                                         │
│   ┌─────────┐      ┌─────────┐      ┌─────────┐        │
│   │  Convo  │ ←──→ │  Edit   │ ←──→ │ Galaxy  │        │
│   │         │      │         │      │         │        │
│   └────┬────┘      └─────────┘      └─────────┘        │
│        │                                                │
│        └─── Manages: voiceMode / textMode               │
└─────────────────────────────────────────────────────────┘
```

**States:**
- `convo`: Active conversation (voice or text)
- `edit`: Editing a node
- `galaxy`: 3D visualization of node graph

**Context:**
```typescript
{
  currentNodeId: string | null;
  pendingNodeDraft: NodeDraft | null;
  nodes: Node[];
  mode: 'mobile' | 'desktop';
}
```

**Events:**
- `EDIT_NODE` (from conversation or save button)
- `VIEW_GALAXY` (from nav button)
- `RETURN_TO_CONVO` (from nav button)
- `PUBLISH_NODE` (from edit mode)
- `CANCEL_EDIT`

### Level 2: Conversation Machine (Child of App.Convo)

```
┌─────────────────────────────────────────────────────────┐
│                 CONVERSATION MACHINE                    │
│                                                         │
│   ┌─────────┐                    ┌─────────┐            │
│   │  Voice  │ ←──────────────→   │  Text   │            │
│   │         │                    │         │            │
│   └────┬────┘                    └─────────┘            │
│        │                                                │
│        └─── Embeds: voiceMachine (from lib/voice...)    │
└─────────────────────────────────────────────────────────┘
```

**States:**
- `voice`: Voice conversation mode (invokes `voiceMachine`)
- `text`: Text-only conversation mode

**Context:**
```typescript
{
  messages: Message[];
  suggestedNodes: NodeSuggestion[];
}
```

**Events:**
- `TOGGLE_VOICE`
- `TOGGLE_TEXT`
- `SUGGEST_NODE` (from AI)
- `CREATE_NODE` (user confirms suggestion; a minimal machine sketch follows)
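A minimal sketch of this NEW machine, reusing XState v5's `setup` API as in `lib/voice-machine.ts`; the invoke wiring is an assumption, not the final implementation:

```typescript
// lib/conversation-machine.ts (sketch)
import { setup } from 'xstate';
import { voiceMachine } from '@/lib/voice-machine';

export const conversationMachine = setup({
  actors: { voiceMachine },
}).createMachine({
  id: 'conversation',
  initial: 'text',
  states: {
    text: {
      on: { TOGGLE_VOICE: 'voice' },
    },
    voice: {
      // Embed the tested voice machine as a child actor
      invoke: { id: 'voice', src: 'voiceMachine' },
      on: { TOGGLE_TEXT: 'text' },
    },
  },
});
```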
|
||||
|
||||
### Level 3: Voice Machine (Existing - Child of Conversation.Voice)

Already implemented in `lib/voice-machine.ts`. No changes needed.

---

## 3. Data Model

### Node Schema

```typescript
interface Node {
  id: string;                 // ATproto record URI
  title: string;
  content: string;            // Markdown
  embedding: number[];        // gemini-embedding-001 (768 dims)
  links: {
    to: string;               // Node ID
    strength: number;         // 0-1, from vector similarity
    userApproved: boolean;
  }[];
  position3D: { x: number; y: number; z: number }; // UMAP coords
  createdAt: Date;
  updatedAt: Date;
  published: boolean;         // Published to ATproto PDS
}

interface NodeDraft {
  title: string;
  content: string;
  conversationContext: Message[]; // Last N messages
}

interface NodeSuggestion {
  draft: NodeDraft;
  confidence: number; // AI's confidence in suggestion
}
```

---

## 4. UI Architecture

### Responsive Navigation

#### Mobile (< 768px)
```
┌─────────────────────────────────────┐
│             App Content             │
│                                     │
│                                     │
│                                     │
│                                     │
├─────────────────────────────────────┤
│  [Convo]    [Edit]    [Galaxy]      │ ← Bottom Bar
└─────────────────────────────────────┘
```

#### Desktop (≥ 768px)
```
┌─────┬──────────────────────────────┐
│     │                              │
│  C  │                              │
│  o  │         App Content          │
│  n  │                              │
│  v  │                              │
│  o  │                              │
│     │                              │
│  E  │                              │
│  d  │                              │
│  i  │                              │
│  t  │                              │
│     │                              │
│  G  │                              │
│  a  │                              │
│  l  │                              │
│  a  │                              │
│  x  │                              │
│  y  │                              │
├─────┴──────────────────────────────┤
│             User Menu              │
└────────────────────────────────────┘
```

### Component Structure

```
app/
├── layout.tsx (with AppShell from Mantine)
├── page.tsx (redirects to /chat)
└── chat/
    └── page.tsx

components/
├── AppStateMachine.tsx (Provides app-level state context)
├── Navigation/
│   ├── MobileBottomBar.tsx
│   └── DesktopSidebar.tsx
├── Conversation/
│   ├── ConversationView.tsx (existing chat UI)
│   ├── VoiceControls.tsx (extracted from page)
│   ├── TextInput.tsx (extracted from page)
│   └── NodeSuggestionCard.tsx (NEW - shows AI suggestion)
├── Edit/
│   ├── NodeEditor.tsx (NEW - Mantine form with RTE)
│   └── LinkSuggestions.tsx (NEW - shows related nodes)
└── Galaxy/
    ├── GalaxyView.tsx (NEW - R3F canvas)
    ├── NodeMesh.tsx (NEW - 3D node representation)
    └── ConnectionLines.tsx (NEW - edges between nodes)

lib/
├── app-machine.ts (NEW - top-level state machine)
├── conversation-machine.ts (NEW - voice/text toggle)
└── voice-machine.ts (EXISTING ✅)

hooks/
├── useAppMachine.ts (NEW)
├── useConversationMode.ts (NEW)
└── useVoiceMode.ts (EXISTING ✅)
```

---

## 5. Implementation Phases

### **Phase 1: App State Machine Foundation** (Est: 2-3 hours)

#### Tasks:
1. Create `lib/app-machine.ts` with states: convo, edit, galaxy
2. Create `components/AppStateMachine.tsx` provider
3. Update `app/layout.tsx` to wrap with provider
4. Create `hooks/useAppMachine.ts`

**Acceptance Criteria:**
- Can transition between convo/edit/galaxy states
- State persists across page navigations
- Development panel shows current app state

---

### **Phase 2: Navigation UI** (Est: 2-3 hours)

#### Tasks:
1. Create `components/Navigation/MobileBottomBar.tsx`
   - 3 buttons: Convo, Edit, Galaxy
   - Highlights active mode
   - Fixed position at bottom
2. Create `components/Navigation/DesktopSidebar.tsx`
   - Vertical layout
   - Icons + labels
   - Mantine NavLink components
3. Update `app/layout.tsx` with responsive navigation
4. Add Mantine `AppShell` for layout management (see the sketch below)

**Acceptance Criteria:**
- Navigation shows on all pages
- Active state highlights correctly
- Clicking nav triggers state machine events
- Responsive (bottom bar mobile, sidebar desktop)

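A sketch of how the responsive shell could hang together, assuming Mantine v7's `AppShell` API; the component imports are the planned files above:

```tsx
// app/layout.tsx (client shell) — a sketch, not the final layout
'use client';

import { AppShell } from '@mantine/core';
import { DesktopSidebar } from '@/components/Navigation/DesktopSidebar';
import { MobileBottomBar } from '@/components/Navigation/MobileBottomBar';

export function Shell({ children }: { children: React.ReactNode }) {
  return (
    <AppShell
      // Sidebar only exists on desktop; fully collapsed on mobile
      navbar={{ width: 80, breakpoint: 'sm', collapsed: { mobile: true } }}
      // Bottom bar is rendered as the footer and hidden on desktop
      footer={{ height: 56 }}
      padding="md"
    >
      <AppShell.Navbar visibleFrom="sm">
        <DesktopSidebar />
      </AppShell.Navbar>

      <AppShell.Main>{children}</AppShell.Main>

      <AppShell.Footer hiddenFrom="sm">
        <MobileBottomBar />
      </AppShell.Footer>
    </AppShell>
  );
}
```
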
---

### **Phase 3: Node Creation Flow** (Est: 4-5 hours)

#### Tasks:
1. Update AI system prompt to suggest nodes
2. Create `components/Conversation/NodeSuggestionCard.tsx`
   - Shows AI-suggested node title/content
   - "Save to Edit" and "Dismiss" buttons
3. Update `conversation-machine.ts` to handle:
   - `SUGGEST_NODE` event from AI response
   - `CREATE_NODE` event from user action
4. Implement node suggestion detection in AI response (sketched below)
5. Wire up "Save to Edit" → transitions to Edit mode

**Acceptance Criteria:**
- AI can suggest creating a node during conversation
- Suggestion appears as card in chat
- Clicking "Save to Edit" transitions to edit mode with draft
- Draft includes conversation context

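For task 4, structured output is more reliable than parsing freeform replies (this is also the fallback named in the risk section). A sketch using the AI SDK's `generateObject`, which is already a dependency; the model id and prompt wording are assumptions:

```typescript
// lib/suggest-node.ts — a sketch of structured suggestion detection
import { generateObject } from 'ai';
import { google } from '@ai-sdk/google';
import { z } from 'zod';

const suggestionSchema = z.object({
  shouldSuggest: z.boolean().describe('Whether the conversation contains a node-worthy idea'),
  title: z.string(),
  content: z.string().describe('Markdown body for the proposed node'),
  confidence: z.number().min(0).max(1),
});

export async function detectNodeSuggestion(transcript: string) {
  const { object } = await generateObject({
    model: google('gemini-2.5-flash'), // assumed model id — use whatever the chat route uses
    schema: suggestionSchema,
    prompt:
      'Decide whether the following conversation contains an idea worth saving as a ' +
      `standalone node. Conversation:\n\n${transcript}`,
  });

  // Only surface confident suggestions as a SUGGEST_NODE event
  return object.shouldSuggest ? object : null;
}
```
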
---

### **Phase 4: Node Editor** (Est: 3-4 hours)

#### Tasks:
1. Create `components/Edit/NodeEditor.tsx`
   - Title input (Mantine TextInput)
   - Content editor (Mantine RichTextEditor or Textarea with markdown preview)
   - "Publish" and "Cancel" buttons
   - "Continue Conversation" button
2. Create `hooks/useNodeEditor.ts` (Mantine form)
3. Implement publish flow (sketched below):
   - Generate embedding (gemini-embedding-001)
   - Write to ATproto PDS
   - Cache in SurrealDB
4. Create `components/Edit/LinkSuggestions.tsx`
   - Vector search for similar nodes
   - User can approve/reject links

**Acceptance Criteria:**
- Can edit node title and content
- Markdown preview works
- "Publish" writes to ATproto + SurrealDB
- "Cancel" discards changes, returns to convo
- "Continue Conversation" saves draft, returns to convo
- Link suggestions appear based on embeddings

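A sketch of the publish flow from task 3, under loudly stated assumptions: `getSessionAgent`, `getDb`, and the `app.ponderants.node` collection NSID are hypothetical placeholders, and the embedding-model accessor is assumed from the AI SDK's Google provider:

```typescript
// app/api/nodes/route.ts — a sketch, not the final route handler
import { embed } from 'ai';
import { google } from '@ai-sdk/google';
import { getSessionAgent } from '@/lib/atproto'; // hypothetical session helper
import { getDb } from '@/lib/surreal';           // hypothetical SurrealDB helper

export async function POST(req: Request) {
  const { title, content } = await req.json();

  // 1. Generate the embedding for link suggestions and vector search
  const { embedding } = await embed({
    model: google.textEmbeddingModel('gemini-embedding-001'), // assumed accessor
    value: `${title}\n\n${content}`,
  });

  // 2. Write to the user's PDS first — it is the source of truth
  const agent = await getSessionAgent(req);
  const record = await agent.com.atproto.repo.createRecord({
    repo: agent.did!,
    collection: 'app.ponderants.node', // hypothetical lexicon NSID
    record: { title, content, createdAt: new Date().toISOString() },
  });

  // 3. Cache in SurrealDB; a failure here degrades gracefully (warning, not error)
  try {
    const db = await getDb();
    await db.create('node', { uri: record.data.uri, title, content, embedding });
  } catch {
    return Response.json({ uri: record.data.uri, cacheWarning: true });
  }

  return Response.json({ uri: record.data.uri });
}
```
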
---

### **Phase 5: Galaxy Visualization** (Est: 5-6 hours)

#### Tasks:
1. Implement UMAP dimensionality reduction for nodes (sketched below)
2. Create `components/Galaxy/GalaxyView.tsx`
   - R3F Canvas with OrbitControls
   - Dark space background
   - Camera setup
3. Create `components/Galaxy/NodeMesh.tsx` (see the sketch at the end of this phase)
   - Sphere for each node
   - Size based on node importance
   - Color based on node age or category
   - On hover: show tooltip with title
   - On click: transition to Edit mode
4. Create `components/Galaxy/ConnectionLines.tsx`
   - Lines between linked nodes
   - Opacity based on link strength
5. Optimize rendering for 100+ nodes

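For task 1, `umap-js` (already in the dependency list) can reduce the cached embeddings to the `position3D` coordinates in the node schema. A sketch; parameter values are illustrative:

```typescript
// lib/layout-galaxy.ts — a sketch of the UMAP reduction step
import { UMAP } from 'umap-js';
import type { Node } from './types'; // hypothetical shared types module

export function computeGalaxyPositions(nodes: Node[]): Node[] {
  if (nodes.length < 3) return nodes; // UMAP needs a handful of points to be meaningful

  const umap = new UMAP({
    nComponents: 3,                            // we want x/y/z, not the 2D default
    nNeighbors: Math.min(15, nodes.length - 1),
    minDist: 0.1,
  });

  // fit() is synchronous and returns one 3-vector per input embedding
  const coords = umap.fit(nodes.map((n) => n.embedding));

  return nodes.map((node, i) => ({
    ...node,
    position3D: { x: coords[i][0], y: coords[i][1], z: coords[i][2] },
  }));
}
```
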

**Acceptance Criteria:**
- Nodes render in 3D space
- Can orbit/zoom camera
- Clicking a node opens it in Edit mode
- Links visible between related nodes
- Smooth performance with 100+ nodes
- Responsive (works on mobile)

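The node spheres from task 3 might look roughly like this with react-three-fiber; the sizing and color heuristics are placeholders:

```tsx
// components/Galaxy/NodeMesh.tsx — a sketch, not the final component
'use client';

import { useState } from 'react';
import type { Node } from '@/lib/types'; // hypothetical shared types

interface NodeMeshProps {
  node: Node;
  onSelect: (id: string) => void; // e.g. sends EDIT_NODE to the app machine
}

export function NodeMesh({ node, onSelect }: NodeMeshProps) {
  const [hovered, setHovered] = useState(false);
  const { x, y, z } = node.position3D;

  return (
    <mesh
      position={[x, y, z]}
      onClick={() => onSelect(node.id)}
      onPointerOver={() => setHovered(true)}
      onPointerOut={() => setHovered(false)}
    >
      {/* Size by link count as a stand-in for "importance" */}
      <sphereGeometry args={[0.2 + 0.05 * node.links.length, 16, 16]} />
      <meshStandardMaterial color={hovered ? '#ffd166' : '#7aa2f7'} />
    </mesh>
  );
}
```
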
---

### **Phase 6: Conversation Machine** (Est: 2-3 hours)

#### Tasks:
1. Create `lib/conversation-machine.ts`
   - States: voice, text
   - Invokes `voiceMachine` in voice state
2. Create `hooks/useConversationMode.ts`
3. Refactor `app/chat/page.tsx` to use conversation machine
4. Add voice/text toggle button

**Acceptance Criteria:**
- Can toggle between voice and text modes
- Voice mode properly invokes existing voiceMachine
- State transitions are clean
- Toggle button shows current mode

---

## 6. Remaining Work Breakdown

### By Feature Area

| Feature | Tasks | Est. Hours | Priority |
|---------|-------|------------|----------|
| **App State Machine** | 4 | 2-3 | P0 (Foundation) |
| **Navigation UI** | 4 | 2-3 | P0 (Foundation) |
| **Node Creation** | 5 | 4-5 | P1 (Core Flow) |
| **Node Editor** | 4 | 3-4 | P1 (Core Flow) |
| **Galaxy Viz** | 5 | 5-6 | P1 (Core Flow) |
| **Conversation Machine** | 4 | 2-3 | P2 (Enhancement) |
| **Testing** | 6 | 3-4 | P0 (Ongoing) |
| **ATproto Integration** | 3 | 2-3 | P1 (Core Flow) |
| **Vector Search** | 2 | 2-3 | P1 (Core Flow) |

### Total Estimation
- **Core Features**: 18-22 hours
- **Testing & Polish**: 3-4 hours
- **Total**: ~21-26 hours of focused development

### By Priority
- **P0 (Must Have)**: App machine, Navigation, Testing infrastructure
- **P1 (Core Value)**: Node creation, Editor, Galaxy, ATproto, Vector search
- **P2 (Enhancement)**: Conversation machine (voice/text toggle)

---

## 7. Technical Considerations

### State Persistence
- Use `localStorage` for app state persistence (sketched below)
- Restore state on page reload
- Clear state on logout

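Restore-on-reload can reuse XState v5's built-in snapshot persistence. A minimal sketch, assuming the provider owns the actor; the storage key is an arbitrary choice:

```typescript
// components/AppStateMachine.tsx (excerpt) — a sketch of localStorage persistence
import { createActor } from 'xstate';
import { appMachine } from '@/lib/app-machine';

const STORAGE_KEY = 'ponderants.app-state';

export function createPersistedAppActor() {
  // Restore the previous snapshot, if any
  const saved = typeof window !== 'undefined' ? localStorage.getItem(STORAGE_KEY) : null;
  const actor = createActor(appMachine, {
    snapshot: saved ? JSON.parse(saved) : undefined,
  });

  // Persist on every transition
  actor.subscribe(() => {
    localStorage.setItem(STORAGE_KEY, JSON.stringify(actor.getPersistedSnapshot()));
  });

  return actor;
}

// On logout: localStorage.removeItem(STORAGE_KEY)
```
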
### Performance
- Lazy load Galaxy view (code splitting)
- Virtualize node list in large graphs
- Debounce vector search queries
- Memoize UMAP calculations

### Error Handling
- Graceful fallback if ATproto write fails
- Retry logic for network errors
- User-friendly error messages
- Rollback SurrealDB cache if PDS write fails

### Accessibility
- Keyboard navigation for all UI
- ARIA labels for state machine controls
- Focus management on state transitions
- Screen reader announcements

---

## 8. Success Metrics

### Must Pass Before Launch
- [ ] All Magnitude tests pass
- [ ] Full user flow works: Convo → Node suggestion → Edit → Publish → Galaxy → View node
- [ ] No TypeScript errors
- [ ] Mobile and desktop layouts work
- [ ] Data writes to ATproto PDS successfully
- [ ] Vector search returns relevant results

### Quality Bars
- [ ] State transitions are instant (< 100ms)
- [ ] Galaxy renders smoothly (60fps with 100 nodes)
- [ ] Voice mode integration doesn't break
- [ ] No console errors or warnings

---

## 9. Next Steps

### Immediate (Start Here)
1. **Review this plan with user** - Confirm priorities and scope
2. **Create app-machine.ts** - Foundation for everything
3. **Build navigation UI** - Visual feedback for state changes
4. **Implement node suggestion detection** - Start extracting value from conversations

### Short Term (This Week)
- Complete Phases 1 & 2 (App machine + Navigation)
- Begin Phase 3 (Node creation flow)
- Write Magnitude tests for new flows

### Medium Term (Next Week)
- Complete Phase 4 (Editor)
- Complete Phase 5 (Galaxy)
- Integration testing

---

## 10. Risk Assessment

### Low Risk ✅
- App state machine (similar to voice machine)
- Navigation UI (standard Mantine components)
- Node editor (forms and RTE)

### Medium Risk ⚠️
- ATproto publishing (OAuth flow works, but write API untested)
- Vector embeddings (API calls should work, but scale unknown)
- UMAP dimensionality reduction (library integration)

### High Risk 🔴
- Galaxy performance on mobile (R3F can be heavy)
- Node suggestion detection from AI (prompt engineering needed)
- Link suggestion accuracy (depends on embedding quality)

### Mitigation Strategies
- **Galaxy**: Start with simple spheres, add detail later. Implement LOD.
- **Node Detection**: Use structured output from Gemini if freeform fails
- **Links**: Allow manual link creation as fallback

---

## Appendix A: File Tree (Post-Implementation)

```
app/
├── layout.tsx (AppShell + AppStateMachine provider)
├── page.tsx (redirect to /chat)
├── chat/page.tsx (Conversation view)
├── edit/page.tsx (Node editor view)
└── galaxy/page.tsx (3D visualization view)

components/
├── AppStateMachine.tsx
├── Navigation/
│   ├── MobileBottomBar.tsx
│   └── DesktopSidebar.tsx
├── Conversation/
│   ├── ConversationView.tsx
│   ├── VoiceControls.tsx
│   ├── TextInput.tsx
│   └── NodeSuggestionCard.tsx
├── Edit/
│   ├── NodeEditor.tsx
│   └── LinkSuggestions.tsx
└── Galaxy/
    ├── GalaxyView.tsx
    ├── NodeMesh.tsx
    └── ConnectionLines.tsx

lib/
├── app-machine.ts (NEW)
├── conversation-machine.ts (NEW)
└── voice-machine.ts (EXISTING ✅)

hooks/
├── useAppMachine.ts (NEW)
├── useConversationMode.ts (NEW)
└── useVoiceMode.ts (EXISTING ✅)

api/
├── nodes/route.ts (CRUD for nodes)
├── embeddings/route.ts (Generate embeddings)
└── links/route.ts (Vector search for suggestions)
```
40
playwright.config.ts
Normal file
@@ -0,0 +1,40 @@
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './tests',
  fullyParallel: true,
  forbidOnly: !!process.env.CI,
  retries: process.env.CI ? 2 : 0,
  workers: process.env.CI ? 1 : undefined,
  reporter: 'html',
  use: {
    baseURL: 'http://localhost:3000',
    trace: 'on-first-retry',
  },

  projects: [
    // Setup project
    {
      name: 'setup',
      testMatch: /.*\.setup\.ts/,
    },

    // Chromium tests using authenticated state
    {
      name: 'chromium',
      use: {
        ...devices['Desktop Chrome'],
        // Use the saved authenticated state
        storageState: '.playwright/.auth/user.json',
      },
      dependencies: ['setup'],
    },
  ],

  // Run dev server before tests
  webServer: {
    command: 'pnpm dev',
    url: 'http://localhost:3000',
    reuseExistingServer: !process.env.CI,
  },
});
47
pnpm-lock.yaml
generated
@@ -47,6 +47,9 @@ importers:
      '@tabler/icons-react':
        specifier: ^3.35.0
        version: 3.35.0(react@19.2.0)
+     '@xstate/react':
+       specifier: ^6.0.0
+       version: 6.0.0(@types/react@19.2.2)(react@19.2.0)(xstate@5.24.0)
      ai:
        specifier: latest
        version: 5.0.89(zod@4.1.12)
@@ -55,7 +58,7 @@ importers:
        version: 9.0.2
      next:
        specifier: latest
-       version: 16.0.1(@babel/core@7.28.5)(@opentelemetry/api@1.9.0)(react-dom@19.2.0(react@19.2.0))(react@19.2.0)
+       version: 16.0.1(@babel/core@7.28.5)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@19.2.0(react@19.2.0))(react@19.2.0)
      openid-client:
        specifier: latest
        version: 6.8.1
@@ -74,10 +77,16 @@ importers:
      umap-js:
        specifier: latest
        version: 1.4.0
+     xstate:
+       specifier: ^5.24.0
+       version: 5.24.0
      zod:
        specifier: latest
        version: 4.1.12
    devDependencies:
+     '@playwright/test':
+       specifier: ^1.56.1
+       version: 1.56.1
      '@types/jsonwebtoken':
        specifier: latest
        version: 9.0.10
@@ -991,6 +1000,11 @@
  '@pinojs/redact@0.4.0':
    resolution: {integrity: sha512-k2ENnmBugE/rzQfEcdWHcCY+/FM3VLzH9cYEsbdsoqrvzAKRhUZeRNhAZvB8OitQJ1TBed3yqWtdjzS6wJKBwg==}

+ '@playwright/test@1.56.1':
+   resolution: {integrity: sha512-vSMYtL/zOcFpvJCW71Q/OEGQb7KYBPAdKh35WNSkaZA75JlAO8ED8UN6GUNTm3drWomcbcqRPFqQbLae8yBTdg==}
+   engines: {node: '>=18'}
+   hasBin: true
+
  '@posthog/core@1.5.2':
    resolution: {integrity: sha512-iedUP3EnOPPxTA2VaIrsrd29lSZnUV+ZrMnvY56timRVeZAXoYCkmjfIs3KBAsF8OUT5h1GXLSkoQdrV0r31OQ==}

@@ -1285,6 +1299,15 @@
  '@webgpu/types@0.1.66':
    resolution: {integrity: sha512-YA2hLrwLpDsRueNDXIMqN9NTzD6bCDkuXbOSe0heS+f8YE8usA6Gbv1prj81pzVHrbaAma7zObnIC+I6/sXJgA==}

+ '@xstate/react@6.0.0':
+   resolution: {integrity: sha512-xXlLpFJxqLhhmecAXclBECgk+B4zYSrDTl8hTfPZBogkn82OHKbm9zJxox3Z/YXoOhAQhKFTRLMYGdlbhc6T9A==}
+   peerDependencies:
+     react: ^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0
+     xstate: ^5.20.0
+   peerDependenciesMeta:
+     xstate:
+       optional: true
+
  acorn-jsx@5.3.2:
    resolution: {integrity: sha512-rq9s+JNhf0IChjtDXxllJ7g41oZk5SlXtp0LHwyA5cejwn7vKmKp4pPri6YEePv2PU65sAsegbXtIinmDFDXgQ==}
    peerDependencies:
@@ -3381,6 +3404,9 @@
      utf-8-validate:
        optional: true

+ xstate@5.24.0:
+   resolution: {integrity: sha512-h/213ThFfZbOefUWrLc9ZvYggEVBr0jrD2dNxErxNMLQfZRN19v+80TaXFho17hs8Q2E1mULtm/6nv12um0C4A==}
+
  yallist@3.1.1:
    resolution: {integrity: sha512-a4UGQaWPH59mOXUYnAG2ewncQS4i4F43Tv3JoAM+s2VDAmS9NsK8GpDMLrCHPksFT7h3K6TOoUNn2pb7RoXx4g==}

@@ -4255,6 +4281,10 @@

  '@pinojs/redact@0.4.0': {}

+ '@playwright/test@1.56.1':
+   dependencies:
+     playwright: 1.56.1
+
  '@posthog/core@1.5.2':
    dependencies:
      cross-spawn: 7.0.6
@@ -4562,6 +4592,16 @@

  '@webgpu/types@0.1.66': {}

+ '@xstate/react@6.0.0(@types/react@19.2.2)(react@19.2.0)(xstate@5.24.0)':
+   dependencies:
+     react: 19.2.0
+     use-isomorphic-layout-effect: 1.2.1(@types/react@19.2.2)(react@19.2.0)
+     use-sync-external-store: 1.6.0(react@19.2.0)
+   optionalDependencies:
+     xstate: 5.24.0
+   transitivePeerDependencies:
+     - '@types/react'
+
  acorn-jsx@5.3.2(acorn@8.15.0):
    dependencies:
      acorn: 8.15.0
@@ -5964,7 +6004,7 @@

  natural-compare@1.4.0: {}

- next@16.0.1(@babel/core@7.28.5)(@opentelemetry/api@1.9.0)(react-dom@19.2.0(react@19.2.0))(react@19.2.0):
+ next@16.0.1(@babel/core@7.28.5)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@19.2.0(react@19.2.0))(react@19.2.0):
    dependencies:
      '@next/env': 16.0.1
      '@swc/helpers': 0.5.15
@@ -5983,6 +6023,7 @@
      '@next/swc-win32-arm64-msvc': 16.0.1
      '@next/swc-win32-x64-msvc': 16.0.1
      '@opentelemetry/api': 1.9.0
+     '@playwright/test': 1.56.1
      sharp: 0.34.5
    transitivePeerDependencies:
      - '@babel/core'
@@ -7004,6 +7045,8 @@
      bufferutil: 4.0.9
      utf-8-validate: 6.0.5

+ xstate@5.24.0: {}
+
  yallist@3.1.1: {}

  yocto-queue@0.1.0: {}
16
public/logo.svg
Normal file
@@ -0,0 +1,16 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 60 100" width="256" height="256" aria-labelledby="logoTitle">
  <title id="logoTitle">Woven abstract logo of communication waves (Vertical)</title>

  <g fill-rule="evenodd" fill="none" transform="translate(-18, 20) rotate(90, 48, 30) translate(0, 60) scale(1, -1)">

    <!-- Jagged wave - Left end segment (drawn first, appears behind) --><path stroke="#AAAAAA" stroke-width="1" stroke-linecap="round" fill="none" d="M 42 30 L 45 26.25" />

    <!-- Jagged wave - Right end segment (drawn second, appears behind) --><path stroke="#AAAAAA" stroke-width="1" stroke-linecap="round" fill="none" d="M 51 33.75 L 54 30" />

    <!-- Curved wave (drawn third, appears in middle layer) --><path stroke="#444444" stroke-width="1" stroke-linecap="round" fill="none" d="M 42 30 Q 45 37.5, 48 30 Q 51 22.5, 54 30" />

    <!-- Jagged wave - Center segment (drawn last, appears in front) --><path stroke="#AAAAAA" stroke-width="1" stroke-linecap="round" fill="none" d="M 45 26.25 L 48 30 L 51 33.75" />

  </g>

</svg>
After Width: | Height: | Size: 1.0 KiB |
177
tests/README.md
Normal file
@@ -0,0 +1,177 @@
# Ponderants Test Suite

This directory contains all automated and manual tests for the Ponderants application.

## Directory Structure

```
tests/
├── magnitude/          # Magnitude.run automated tests
│   └── node-publishing.mag.ts
├── helpers/            # Reusable test utilities
│   ├── playwright-helpers.ts
│   └── README.md
└── README.md           # This file
```

## Test Frameworks

### Magnitude.run
AI-powered end-to-end testing framework that uses vision to interact with the browser like a human.

**Run tests:**
```bash
pnpm test
# or
npx magnitude
```

**Test files:** `*.mag.ts` in the `magnitude/` directory

### Playwright MCP
Interactive browser automation for manual testing and debugging.

**Usage:** Use the Playwright MCP tools with helper functions from `helpers/playwright-helpers.ts`

## Test Coverage

### Node Publishing Flow (`magnitude/node-publishing.mag.ts`)

**Happy Path Tests:**
- ✅ User can publish a node from conversation
- ✅ User can edit node draft before publishing
- ✅ User can cancel node draft without publishing

**Unhappy Path Tests:**
- ✅ Cannot publish node without authentication
- ✅ Cannot publish node with empty title
- ✅ Cannot publish node with empty content
- ✅ Shows error notification if publish fails
- ✅ Handles long content with truncation
- ✅ Shows warning when cache fails but publish succeeds

**Integration Tests:**
- ✅ Complete user journey: Login → Converse → Publish → View

## Setup

### 1. Install Dependencies
```bash
pnpm install
```

### 2. Configure Test Environment
```bash
cp .env.test.example .env.test
```

Edit `.env.test` and add your test credentials:
```env
TEST_BLUESKY_USERNAME=your-test-user.bsky.social
TEST_BLUESKY_PASSWORD=your-test-password
```

**Important:** Use a dedicated test account, not your personal account!

### 3. Start Development Server
```bash
pnpm dev
```

The dev server must be running on `http://localhost:3000` before running tests.

### 4. Run Tests
```bash
pnpm test
```

## Writing New Tests

### Using Magnitude
```typescript
import { test } from 'magnitude-test';

test('Test description', async (agent) => {
  await agent.open('http://localhost:3000');
  await agent.act('Describe the action');
  await agent.check('Verify the result');
});
```

### Using Helpers
```typescript
import { test } from 'magnitude-test';
import { loginWithBluesky, startConversation } from '../helpers/playwright-helpers';

test('Test with helpers', async (agent) => {
  const page = agent.page;

  await loginWithBluesky(page);
  await startConversation(page, 'My test message');

  await agent.check('Expected result');
});
```

## Best Practices

1. **Test Isolation:** Each test should be independent and not rely on previous tests
2. **Test Data:** Use dedicated test accounts and test data
3. **Cleanup:** Clean up any created data after tests (nodes, conversations)
4. **Error Handling:** Test both happy paths and error scenarios
5. **Documentation:** Comment complex test logic and edge cases
6. **Reusability:** Use helper functions for common flows
7. **Readability:** Use descriptive test names and clear assertions

## Continuous Integration

Tests should run on every pull request:

```yaml
# .github/workflows/test.yml
- name: Run tests
  run: pnpm test
```

## Debugging Tests

### View Test Execution
Magnitude.run provides visual feedback during test execution.

### Interactive Testing with Playwright MCP
Use Playwright MCP tools for step-by-step debugging:

```typescript
import { loginWithBluesky } from './tests/helpers/playwright-helpers';

// In MCP session
await loginWithBluesky(page, {
  username: 'test-user.bsky.social',
  password: 'test-password'
});
```

### Check Server Logs
Monitor the dev server output for API errors:
```bash
pnpm dev
# Watch for [POST /api/nodes] logs
```

## Known Issues

1. **OAuth Rate Limiting:** Repeated login tests may hit Bluesky rate limits
   - Solution: Use fewer login tests or implement session caching

2. **AI Response Times:** Chat tests may time out on slow responses
   - Solution: Increase the `waitForAIResponse` timeout

3. **Cache Failures:** The SurrealDB cache may fail, but that shouldn't break tests
   - Expected: Tests should still pass with warning notifications

## Resources

- [Magnitude.run Documentation](https://magnitude.run/docs)
- [Playwright Documentation](https://playwright.dev)
- [ATProto OAuth Spec](https://atproto.com/specs/oauth)
- [Bluesky API Docs](https://docs.bsky.app)
34
tests/auth.setup.ts
Normal file
@@ -0,0 +1,34 @@
import { test as setup, expect } from '@playwright/test';
import { fileURLToPath } from 'node:url';
import path from 'node:path';

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

const authFile = path.join(__dirname, '../.playwright/.auth/user.json');

setup('authenticate', async ({ page }) => {
  // Navigate to login page
  await page.goto('http://localhost:3000/login');

  // Fill in the Bluesky handle
  await page.getByPlaceholder('e.g., yourname.bsky.social').fill('aprongecko.bsky.social');

  // Click login button
  await page.getByRole('button', { name: 'Log in with Bluesky' }).click();

  // Wait for OAuth redirect and handle it
  // This will open the Bluesky authorization page
  // In a real test, you would need to handle the OAuth flow
  // For now, we'll wait for the callback
  await page.waitForURL(/callback/, { timeout: 30000 });

  // After successful auth, should redirect to chat
  await page.waitForURL(/chat/, { timeout: 10000 });

  // Verify we're logged in
  await expect(page.getByText('Ponderants Interview')).toBeVisible();

  // Save authenticated state
  await page.context().storageState({ path: authFile });
});
77
tests/helpers/README.md
Normal file
@@ -0,0 +1,77 @@
# Playwright Test Helpers

This directory contains reusable helper functions for both Magnitude tests and Playwright MCP testing.

## Usage

### In Magnitude Tests

```typescript
import { test } from 'magnitude-test';
import {
  navigateToApp,
  loginWithBluesky,
  startConversation,
  createNodeDraft,
  publishNode,
  completeNodePublishFlow,
} from '../helpers/playwright-helpers';

test('User can publish a node', async (agent) => {
  const page = agent.page; // Get Playwright page from Magnitude agent

  await completeNodePublishFlow(page, 'My test conversation');

  await agent.check('Node published successfully');
});
```

### In Playwright MCP

Use the helper functions directly with the Playwright MCP page instance:

```typescript
import { loginWithBluesky, startConversation } from './tests/helpers/playwright-helpers';

// In your MCP session
await loginWithBluesky(page);
await startConversation(page, 'Hello AI');
```

## Available Helpers

### Authentication
- `loginWithBluesky(page, credentials?)` - Complete OAuth login flow
- `logout(page)` - Log out of the application
- `isLoggedIn(page)` - Check if user is authenticated

### Navigation
- `navigateToApp(page, baseUrl?)` - Go to app home page

### Conversation
- `startConversation(page, message)` - Send first message in chat
- `waitForAIResponse(page, timeout?)` - Wait for AI to finish responding

### Node Publishing
- `createNodeDraft(page)` - Click "Create Node" button
- `editNodeDraft(page, title?, content?)` - Modify draft before publishing
- `publishNode(page)` - Click "Publish" and wait for success
- `completeNodePublishFlow(page, message?, credentials?)` - Full end-to-end flow

## Environment Variables

Set these in your `.env.test` file:

```env
TEST_BLUESKY_USERNAME=your-test-user.bsky.social
TEST_BLUESKY_PASSWORD=your-test-password
```

If they are not set, the helpers fall back to default values (which will fail for real OAuth).

## Design Principles

1. **Reusability** - Each helper is atomic and can be composed
2. **Reliability** - Helpers wait for network and UI state before proceeding
3. **Flexibility** - Optional parameters allow customization
4. **Error Handling** - Helpers throw clear errors when expectations aren't met
216
tests/helpers/playwright-helpers.ts
Normal file
@@ -0,0 +1,216 @@
/**
 * Playwright Helper Functions
 *
 * Reusable test utilities for both Magnitude tests and Playwright MCP.
 * These helpers encapsulate common test flows to reduce duplication.
 */

import type { Page } from '@playwright/test';

export interface TestCredentials {
  username: string;
  password: string;
}

/**
 * Default test credentials for Bluesky OAuth login.
 * In production tests, these should come from environment variables.
 */
export const DEFAULT_TEST_CREDENTIALS: TestCredentials = {
  username: process.env.TEST_BLUESKY_USERNAME || 'test-user.bsky.social',
  password: process.env.TEST_BLUESKY_PASSWORD || 'test-password',
};

/**
 * Navigate to the application home page and wait for it to load.
 */
export async function navigateToApp(page: Page, baseUrl: string = 'http://localhost:3000'): Promise<void> {
  await page.goto(baseUrl);
  await page.waitForLoadState('networkidle');
}

/**
 * Complete the Bluesky OAuth login flow.
 *
 * This function:
 * 1. Clicks the "Log in with Bluesky" button
 * 2. Waits for the Bluesky OAuth page to load
 * 3. Fills in credentials
 * 4. Submits the form
 * 5. Waits for the redirect back to the app
 *
 * @param page - Playwright page instance
 * @param credentials - Optional custom credentials (defaults to DEFAULT_TEST_CREDENTIALS)
 */
export async function loginWithBluesky(
  page: Page,
  credentials: TestCredentials = DEFAULT_TEST_CREDENTIALS
): Promise<void> {
  // Click login button
  await page.click('text="Log in with Bluesky"');

  // Wait for Bluesky OAuth page
  await page.waitForURL(/bsky\.social|bsky\.app/);

  // Fill in credentials
  await page.fill('input[name="identifier"]', credentials.username);
  await page.fill('input[name="password"]', credentials.password);

  // Submit login form
  await page.click('button[type="submit"]');

  // Wait for redirect back to app
  await page.waitForURL(/localhost:3000/);
  await page.waitForLoadState('networkidle');
}

/**
 * Start a new conversation by sending a message in the chat interface.
 *
 * @param page - Playwright page instance
 * @param message - The message to send
 */
export async function startConversation(page: Page, message: string): Promise<void> {
  // Find the chat input (textarea or input field)
  const chatInput = page.locator('textarea, input[type="text"]').first();

  // Type the message
  await chatInput.fill(message);

  // Submit (press Enter or click send button)
  await chatInput.press('Enter');

  // Wait for AI response to appear
  await page.waitForSelector('text=/AI|Assistant|thinking/i', { timeout: 10000 });
}

/**
 * Create a node draft from the current conversation.
 *
 * This assumes:
 * - User is already in a conversation
 * - The "Create Node" button is visible
 */
export async function createNodeDraft(page: Page): Promise<void> {
  // Click "Create Node" button
  await page.click('text="Create Node"');

  // Wait for navigation to edit page
  await page.waitForURL(/\/edit/);
  await page.waitForLoadState('networkidle');

  // Verify we're on the edit page with draft content
  await page.waitForSelector('input[value]:not([value=""])', { timeout: 5000 });
}

/**
 * Publish a node from the edit page.
 *
 * This assumes:
 * - User is already on the /edit page
 * - Draft content is loaded
 */
export async function publishNode(page: Page): Promise<void> {
  // Click "Publish Node" button
  await page.click('text="Publish Node"');

  // Wait for success notification
  await page.waitForSelector('text=/Node published|success/i', { timeout: 15000 });

  // Wait for navigation back to conversation view
  await page.waitForURL(/\/chat|\/conversation/, { timeout: 5000 });
}

/**
 * Complete end-to-end flow: Login → Conversation → Create Node → Publish
 *
 * This is the "happy path" test flow for the core user journey.
 *
 * @param page - Playwright page instance
 * @param message - Message to start conversation with
 * @param credentials - Optional custom credentials
 */
export async function completeNodePublishFlow(
  page: Page,
  message: string = 'Test conversation for node creation',
  credentials: TestCredentials = DEFAULT_TEST_CREDENTIALS
): Promise<void> {
  await navigateToApp(page);
  await loginWithBluesky(page, credentials);
  await startConversation(page, message);
  await createNodeDraft(page);
  await publishNode(page);
}

/**
 * Edit node draft content before publishing.
 *
 * @param page - Playwright page instance
 * @param title - New title (undefined to skip)
 * @param content - New content (undefined to skip)
 */
export async function editNodeDraft(
  page: Page,
  title?: string,
  content?: string
): Promise<void> {
  if (title !== undefined) {
    // `label` is not a real attribute on <input>, so match by accessible label,
    // falling back to the placeholder for inputs without one
    const titleInput = page.getByLabel('Title').or(page.locator('input[placeholder*="title" i]')).first();
    await titleInput.fill(title);
  }

  if (content !== undefined) {
    const contentInput = page.getByLabel('Content').or(page.locator('textarea[placeholder*="content" i]')).first();
    await contentInput.fill(content);
  }
}

/**
 * Wait for AI to finish responding in the chat.
 *
 * This looks for the typing indicator to disappear.
 */
export async function waitForAIResponse(page: Page, timeoutMs: number = 30000): Promise<void> {
  // Wait for typing indicator to appear
  await page.waitForSelector('text=/thinking|typing|generating/i', {
    timeout: 5000,
    state: 'visible'
  }).catch(() => {
    // Indicator might appear and disappear quickly, that's okay
  });

  // Wait for it to disappear
  await page.waitForSelector('text=/thinking|typing|generating/i', {
    timeout: timeoutMs,
    state: 'hidden'
  }).catch(() => {
    // Might already be gone, that's okay
  });
}

/**
 * Logout from the application.
 */
export async function logout(page: Page): Promise<void> {
  // Click user profile menu
  await page.click('[aria-label="User menu"], button:has-text("Profile")');

  // Click logout button
  await page.click('text="Logout"');

  // Wait for redirect to login page
  await page.waitForURL(/\/login|\/$/);
}

/**
 * Check if user is logged in by looking for authenticated UI elements.
 */
export async function isLoggedIn(page: Page): Promise<boolean> {
  try {
    // Look for chat interface or user menu
    await page.waitForSelector('textarea, [aria-label="User menu"]', { timeout: 2000 });
    return true;
  } catch {
    return false;
  }
}
@@ -1,40 +1,131 @@
 import { test } from 'magnitude-test';

-test('[Happy Path] User can record voice and see transcript', async (agent) => {
-  // Act: Go to chat page
-  await agent.act('Navigate to /chat');
+test('[Happy Path] User can have a full voice conversation with AI', async (agent) => {
+  // Act: Navigate to chat page (assumes user is already authenticated)
+  await agent.open('http://localhost:3000/chat');

-  // Check: Verify initial state
-  await agent.check('The chat input field is empty');
-  await agent.check('A "Start Recording" button is visible');
+  // Check: Initial state - voice button shows "Start Voice Conversation"
+  await agent.check('A button with text "Start Voice Conversation" is visible');

-  // Act: Click the record button
-  // Note: This will require mocking the /api/voice-token response and the
-  // MediaDevices/WebSocket browser APIs in a real test environment
-  await agent.act('Click the "Start Recording" button');
+  // Act: Click to start voice mode
+  await agent.act('Click the "Start Voice Conversation" button');

-  // Check: UI updates to recording state
-  await agent.check('A "Stop Recording" button is visible');
-
-  // Act: Simulate receiving a transcript from the (mocked) Deepgram WebSocket
-  await agent.act(
-    'Simulate an interim transcript "Hello world" from the Deepgram WebSocket'
+  // Check: Button text changes to indicate checking or generating state
+  // Could be "Checking for greeting..." or "Generating speech..." or "Listening..."
+  await agent.check(
+    'The button text has changed from "Start Voice Conversation" to indicate an active state'
   );

-  // Check: The input field is updated
-  await agent.check('The chat input field contains "Hello world"');
+  // Act: If there's a Skip button visible (greeting is playing), click it
+  await agent.act('Click the Skip button if it is visible');

-  // Act: Simulate a final transcript
-  await agent.act(
-    'Simulate a final transcript "Hello world." from the Deepgram WebSocket'
-  );
+  // Check: Should transition to listening state
+  await agent.check('The button shows "Listening... Start speaking"');

-  // Check: The "Stop Recording" button is gone
-  await agent.check('A "Start Recording" button is visible again');
+  // Check: Development test controls should be visible (in dev mode)
+  await agent.check('A section with text "DEV: State Machine Testing" is visible');

-  // Check: The chat input is cleared (because it was submitted)
-  await agent.check('The chat input field is empty');
+  // Act: Use dev button to simulate user starting to speak
+  await agent.act('Click the "Simulate Speech" button in the dev controls');

-  // Check: The finalized transcript appears as a user message
-  await agent.check('The message "Hello world." appears in the chat list');
+  // Check: Button shows speaking state
+  await agent.check('The button text contains "Speaking"');
+
+  // Act: Add a phrase using the dev button
+  await agent.act('Click the "Add Phrase" button in the dev controls');
+
+  // Check: A message bubble appears showing the transcript being spoken
+  await agent.check('A message with text "You (speaking...)" is visible');
+  await agent.check('The message contains the text "Test message"');
+
+  // Check: Button shows timing out state
+  await agent.check('The button text contains "auto-submit"');
+
+  // Act: Trigger the timeout using dev button
+  await agent.act('Click the "Trigger Timeout" button in the dev controls');
+
+  // Check: Button shows submitting or waiting state
+  await agent.check('The button text contains "Submitting" or "Waiting for AI"');
+
+  // Check: The user message appears in the chat
+  await agent.check('A message with text "Test message" appears in the chat history');
+
+  // Wait for AI response (this takes a few seconds)
+  await agent.wait(10000);
+
+  // Check: AI message appears
+  await agent.check('An AI message appears in the chat');
+
+  // Check: Button shows generating or playing TTS state
+  await agent.check('The button text contains "Generating speech" or "AI is speaking"');
+
+  // Check: Skip button is visible during TTS
+  await agent.check('A "Skip" button is visible');
+
+  // Act: Skip the AI audio
+  await agent.act('Click the Skip button');
+
+  // Check: Returns to listening state
+  await agent.check('The button shows "Listening... Start speaking"');
+
+  // Act: Stop voice mode
+  await agent.act('Click the main voice button to stop');
+
+  // Check: Returns to idle state
+  await agent.check('The button shows "Start Voice Conversation"');
 });
+
+test('[Unhappy Path] Voice mode handles errors gracefully', async (agent) => {
+  await agent.open('http://localhost:3000/chat');
+
+  // Act: Start voice mode
+  await agent.act('Click the "Start Voice Conversation" button');
+
+  // Simulate an error scenario (e.g., microphone permission denied)
+  // Note: In a real test, this would involve mocking the getUserMedia API to reject
+  await agent.act('Simulate a microphone permission error');
+
+  // Check: Error message is displayed
+  await agent.check('An error message is shown to the user');
+
+  // Check: Voice mode returns to idle state
+  await agent.check('The button shows "Start Voice Conversation"');
+});
+
+test('[Happy Path] Text input is disabled during voice mode', async (agent) => {
+  await agent.open('http://localhost:3000/chat');
+
+  // Check: Text input is enabled initially
+  await agent.check('The text input field "Or type your thoughts here..." is enabled');
+
+  // Act: Start voice mode
+  await agent.act('Click the "Start Voice Conversation" button');
+
+  // Check: Text input is disabled
+  await agent.check('The text input field is disabled');
+
+  // Act: Stop voice mode
+  await agent.act('Click the main voice button to stop');
+
+  // Check: Text input is enabled again
+  await agent.check('The text input field is enabled');
+});
+
+test('[Happy Path] User can type a message while voice mode is idle', async (agent) => {
+  await agent.open('http://localhost:3000/chat');
+
+  // Act: Type a message in the text input
+  await agent.act('Type "This is a text message" into the text input field');
+
+  // Act: Submit the message
+  await agent.act('Press Enter or click the Send button');
+
+  // Check: Message appears in chat
+  await agent.check('The message "This is a text message" appears as a user message');
+
+  // Wait for AI response
+  await agent.wait(5000);
+
+  // Check: AI responds
+  await agent.check('An AI response appears in the chat');
+});
37
tests/magnitude/cache-success.mag.ts
Normal file
@@ -0,0 +1,37 @@
/**
 * Magnitude Test: Cache Success
 *
 * This test verifies that node publishing succeeds with full cache write,
 * not just a degraded state with warnings.
 */

import { test } from 'magnitude-test';

test('Node publishes successfully with cache (no warnings)', async (agent) => {
  await agent.open('http://localhost:3000');

  // Login
  await agent.act('Click the "Log in with Bluesky" button');
  await agent.act('Fill in credentials and submit')
    .data({
      username: process.env.TEST_BLUESKY_USERNAME || 'test-user.bsky.social',
      password: process.env.TEST_BLUESKY_PASSWORD || 'test-password',
    });
  await agent.check('Logged in successfully');

  // Start conversation
  await agent.act('Type "Test cache write success" and press Enter');
  await agent.check('AI responds');

  // Create and publish node
  await agent.act('Click "Create Node"');
  await agent.check('On edit page with draft');

  await agent.act('Click "Publish Node"');

  // CRITICAL: Should get green success notification, NOT yellow warning
  await agent.check('Success notification is GREEN (not yellow warning)');
  await agent.check('Notification says "Your node has been published to your Bluesky account"');
  await agent.check('Notification does NOT mention "cache update failed"');
  await agent.check('Notification does NOT mention "Advanced features may be unavailable"');
});
220
tests/magnitude/node-publishing.mag.ts
Normal file
@@ -0,0 +1,220 @@
/**
 * Magnitude Tests: Node Publishing Flow
 *
 * Tests for the complete node creation, editing, and publishing workflow.
 * Covers both happy path and error scenarios.
 */

import { test } from 'magnitude-test';

// ============================================================================
// HAPPY PATH TESTS
// ============================================================================

test('User can publish a node from conversation', async (agent) => {
  await agent.open('http://localhost:3000');

  // Step 1: Login with Bluesky
  await agent.act('Click the "Log in with Bluesky" button');
  await agent.check('Redirected to Bluesky login page');

  await agent.act('Fill in username and password')
    .data({
      username: process.env.TEST_BLUESKY_USERNAME || 'test-user.bsky.social',
      password: process.env.TEST_BLUESKY_PASSWORD || 'test-password',
    });

  await agent.act('Click the login submit button');
  await agent.check('Redirected back to app and logged in');
  await agent.check('Chat interface is visible');

  // Step 2: Start a conversation
  await agent.act('Type "Let\'s discuss the philosophy of decentralized social networks" into the chat input and press Enter');
  await agent.check('Message appears in chat');
  await agent.check('AI response appears');

  // Step 3: Create node draft
  await agent.act('Click the "Create Node" button');
  await agent.check('Navigated to edit page');
  await agent.check('Title input has AI-generated content');
  await agent.check('Content textarea has AI-generated content');
  await agent.check('Conversation context is visible at the bottom');

  // Step 4: Publish the node
  await agent.act('Click the "Publish Node" button');
  await agent.check('Success notification appears with "Node published!"');
  await agent.check('Returned to conversation view');
});

test('User can edit node draft before publishing', async (agent) => {
  // Assumes user is already logged in from previous test
  await agent.open('http://localhost:3000/chat');

  // Start conversation
  await agent.act('Type "Testing the edit flow" and press Enter');
  await agent.check('AI responds');

  // Create draft
  await agent.act('Click "Create Node"');
  await agent.check('On edit page with draft content');

  // Edit the content
  await agent.act('Clear the title input and type "My Custom Title"');
  await agent.act('Modify the content textarea to add "This is my edited content."');

  await agent.check('Title shows "My Custom Title"');
  await agent.check('Content includes "This is my edited content."');

  // Publish
  await agent.act('Click "Publish Node"');
  await agent.check('Success notification appears');
});

test('User can cancel node draft without publishing', async (agent) => {
  await agent.open('http://localhost:3000/chat');

  // Start conversation
  await agent.act('Type "Test cancellation" and press Enter');
  await agent.check('AI responds');

  // Create draft
  await agent.act('Click "Create Node"');
  await agent.check('On edit page');

  // Cancel instead of publishing
  await agent.act('Click the "Cancel" button');
  await agent.check('Returned to conversation view');
  await agent.check('Draft was not published'); // Verify no success notification
});

// ============================================================================
// UNHAPPY PATH TESTS
// ============================================================================

test('Cannot publish node without authentication', async (agent) => {
  // Open edit page directly without being logged in
  await agent.open('http://localhost:3000/edit');

  await agent.check('Shows empty state message');
  await agent.check('Message says "No Node Draft"');
  await agent.check('Suggests to start a conversation');
});

test('Cannot publish node with empty title', async (agent) => {
  await agent.open('http://localhost:3000/chat');

  // Create draft
  await agent.act('Type "Test empty title validation" and press Enter');
  await agent.check('AI responds');
  await agent.act('Click "Create Node"');
  await agent.check('On edit page');

  // Clear the title
  await agent.act('Clear the title input completely');

  await agent.check('Publish button is disabled');
});

test('Cannot publish node with empty content', async (agent) => {
  await agent.open('http://localhost:3000/chat');

  // Create draft
  await agent.act('Type "Test empty content validation" and press Enter');
  await agent.check('AI responds');
  await agent.act('Click "Create Node"');
  await agent.check('On edit page');

  // Clear the content
  await agent.act('Clear the content textarea completely');

  await agent.check('Publish button is disabled');
});

test('Shows error notification if publish fails', async (agent) => {
  await agent.open('http://localhost:3000/chat');

  // Create draft
  await agent.act('Type "Test error handling" and press Enter');
  await agent.check('AI responds');
  await agent.act('Click "Create Node"');
  await agent.check('On edit page');

  // Simulate network failure by disconnecting (this is a mock scenario)
  // In a real test, this would require mocking the API
  await agent.act('Click "Publish Node"');

  // If there's a network error, should see error notification
  // Note: This test may need to mock the fetch call to force an error
  await agent.check('Either success or error notification appears');
});

test('Handles long content with truncation', async (agent) => {
  await agent.open('http://localhost:3000/chat');

  // Create a very long message
  const longMessage = 'A'.repeat(500) + ' This is a test of long content truncation for Bluesky posts.';

  await agent.act(`Type "${longMessage}" and press Enter`);
  await agent.check('AI responds');

  await agent.act('Click "Create Node"');
  await agent.check('On edit page');

  await agent.act('Click "Publish Node"');

  // Should still publish successfully (with truncation)
  await agent.check('Success notification appears');
  await agent.check('May show warning about cache or truncation');
});

test('Shows warning when cache fails but publish succeeds', async (agent) => {
  await agent.open('http://localhost:3000/chat');

  await agent.act('Type "Test cache failure graceful degradation" and press Enter');
  await agent.check('AI responds');

  await agent.act('Click "Create Node"');
  await agent.check('On edit page');

  await agent.act('Click "Publish Node"');

  // The system should succeed even if cache/embeddings fail
  await agent.check('Success notification appears');
  // May show yellow warning notification instead of green success
  await agent.check('Notification says "Node published"');
});

// ============================================================================
// INTEGRATION TESTS
// ============================================================================

test('Complete user journey: Login → Converse → Publish → View', async (agent) => {
  // Full end-to-end test
  await agent.open('http://localhost:3000');

  // Login
  await agent.act('Login with Bluesky')
    .data({
      username: process.env.TEST_BLUESKY_USERNAME,
      password: process.env.TEST_BLUESKY_PASSWORD,
    });
  await agent.check('Logged in successfully');

  // Have a meaningful conversation
  await agent.act('Type "I want to explore the concept of digital gardens" and send');
  await agent.check('AI responds with insights');

  await agent.act('Reply with "How do digital gardens differ from blogs?"');
  await agent.check('AI provides detailed explanation');

  // Create and publish
  await agent.act('Click "Create Node"');
  await agent.check('Draft generated from conversation');

  await agent.act('Review the draft and click "Publish Node"');
  await agent.check('Node published successfully');

  // Verify we can continue the conversation
  await agent.check('Back in conversation view');
  await agent.check('Can type new messages');
});
143
tests/voice-mode.spec.ts
Normal file
@@ -0,0 +1,143 @@
|
||||
import { test, expect } from '@playwright/test';
|
||||
|
||||
test.describe('Voice Mode', () => {
|
||||
test.beforeEach(async ({ page }) => {
|
||||
// Navigate to chat page (should be authenticated via setup)
|
||||
await page.goto('/chat');
|
||||
await expect(page.getByText('Ponderants Interview')).toBeVisible();
|
||||
});
|
||||
|
||||
test('should start voice conversation and display correct button text', async ({ page }) => {
|
||||
// Initial state - button should show "Start Voice Conversation"
|
||||
const voiceButton = page.getByRole('button', { name: /Start Voice Conversation/i });
|
||||
await expect(voiceButton).toBeVisible();
|
||||
|
||||
// Click to start voice mode
|
||||
await voiceButton.click();
|
||||
|
||||
// Button should transition to one of the active states
|
||||
// Could be "Generating speech..." if there's a greeting, or "Listening..." if no greeting
|
||||
await expect(page.getByRole('button', { name: /Generating speech|Listening|Checking for greeting/i })).toBeVisible({
|
||||
timeout: 5000,
|
||||
});
|
||||
});
|
||||
|
||||
test('should skip audio during generation and transition to listening', async ({ page }) => {
|
||||
// Start voice mode
|
||||
const voiceButton = page.getByRole('button', { name: /Start Voice Conversation/i });
|
||||
await voiceButton.click();
|
||||
|
||||
// Wait for generation or playing state
|
||||
await expect(page.getByRole('button', { name: /Generating speech|AI is speaking/i })).toBeVisible({
|
||||
timeout: 5000,
|
||||
});
|
||||
|
||||
// Skip button should be visible
|
||||
const skipButton = page.getByRole('button', { name: /Skip/i });
|
||||
await expect(skipButton).toBeVisible();
|
||||
|
||||
// Click skip
|
||||
await skipButton.click();
|
||||
|
||||
// Should transition to listening state
|
||||
await expect(page.getByRole('button', { name: /Listening/i })).toBeVisible({ timeout: 3000 });
|
||||
});
|
||||
|
||||
  test('should use test buttons to simulate full conversation flow', async ({ page }) => {
    // Start voice mode
    await page.getByRole('button', { name: /Start Voice Conversation/i }).click();

    // Wait for initial state (could be checking, generating, or listening)
    await page.waitForTimeout(1000);

    // If there's a skip button (greeting is playing), click it
    const skipButton = page.getByRole('button', { name: /Skip/i });
    if (await skipButton.isVisible()) {
      await skipButton.click();
    }

    // Should eventually reach listening state
    await expect(page.getByRole('button', { name: /Listening/i })).toBeVisible({ timeout: 5000 });

    // In development mode, test buttons should be visible
    const isDevelopment = process.env.NODE_ENV !== 'production';
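    // Note: this checks NODE_ENV in the Playwright process, which may not
    // match the mode the app server was started in. Since the test buttons
    // are rendered by the app, gating on their visibility may be more robust.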
    if (isDevelopment) {
      // Click "Simulate User Speech" test button
      const simulateSpeechButton = page.getByRole('button', { name: /Simulate Speech/i });
      await expect(simulateSpeechButton).toBeVisible();
      await simulateSpeechButton.click();

      // Should transition to userSpeaking state
      await expect(page.getByRole('button', { name: /Speaking/i })).toBeVisible({ timeout: 2000 });

      // Add a phrase using test button
      const addPhraseButton = page.getByRole('button', { name: /Add Phrase/i });
      await addPhraseButton.click();

      // Should be in timingOut state
      await expect(page.getByRole('button', { name: /auto-submit/i })).toBeVisible({ timeout: 2000 });

      // Trigger timeout using test button
      const triggerTimeoutButton = page.getByRole('button', { name: /Trigger Timeout/i });
      await triggerTimeoutButton.click();

      // Should submit and wait for AI
      await expect(page.getByRole('button', { name: /Submitting|Waiting for AI/i })).toBeVisible({
        timeout: 2000,
      });

      // Wait for AI response (this will take a few seconds)
      await expect(page.getByRole('button', { name: /Generating speech|AI is speaking/i })).toBeVisible({
        timeout: 15000,
      });

      // Skip the AI audio
      const skipAudioButton = page.getByRole('button', { name: /Skip/i });
      if (await skipAudioButton.isVisible()) {
        await skipAudioButton.click();
      }

      // Should return to listening
      await expect(page.getByRole('button', { name: /Listening/i })).toBeVisible({ timeout: 3000 });
    }
  });

  test('should stop voice mode and return to idle', async ({ page }) => {
    // Start voice mode
    const voiceButton = page.getByRole('button', { name: /Start Voice Conversation/i });
    await voiceButton.click();

    // Wait for active state
    await page.waitForTimeout(1000);

    // Click the button again to stop
    await page.getByRole('button', { name: /Listening|Speaking|Generating|AI is speaking/i }).click();

    // Should return to idle state
    await expect(page.getByRole('button', { name: /Start Voice Conversation/i })).toBeVisible({
      timeout: 2000,
    });
  });

  test('should disable text input while voice mode is active', async ({ page }) => {
    const textInput = page.getByPlaceholder(/type your thoughts here/i);

    // Text input should be enabled initially
    await expect(textInput).toBeEnabled();

    // Start voice mode
    await page.getByRole('button', { name: /Start Voice Conversation/i }).click();

    // Wait for voice mode to activate
    await page.waitForTimeout(1000);

    // Text input should be disabled
    await expect(textInput).toBeDisabled();

    // Stop voice mode
    await page.getByRole('button', { name: /Listening|Speaking|Generating|AI is speaking/i }).click();

    // Text input should be enabled again
    await expect(textInput).toBeEnabled();
  });
});
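
For reference, a minimal sketch of the voice-mode state machine these tests walk through. The state names below are inferred from the button labels asserted above (only `userSpeaking` and `timingOut` appear verbatim in the test comments), so treat this as an assumption rather than the actual implementation:

```ts
// Hypothetical: the voice-button states the specs above assert on.
type VoiceState =
  | 'idle'             // "Start Voice Conversation"
  | 'checkingGreeting' // "Checking for greeting..."
  | 'generating'       // "Generating speech..." (Skip visible)
  | 'aiSpeaking'       // "AI is speaking..." (Skip visible)
  | 'listening'        // "Listening..."
  | 'userSpeaking'     // "Speaking..." (after Simulate Speech)
  | 'timingOut'        // "auto-submit" countdown (after Add Phrase)
  | 'submitting'       // "Submitting..."
  | 'waitingForAI';    // "Waiting for AI..."
```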
12
todo.md
Normal file
@@ -0,0 +1,12 @@
# TODO

Upcoming items that should be implemented (time-permitting):

- a way to see the visualized version of all nodes in the db
- let's call the "AI" "Mr. DJ" and link to this YouTube video for its name:
  https://www.youtube.com/watch?v=oEauWw9ZGrA
- let's have "Ponderants" in the top-left corner with some sort of very minimal
  svg that represents an abstraction of a human conversing with a robot (like
  maybe four simple shapes max)
- let's have, in the top-center, something that indicates we're in "Convo" mode
- let's stream the AI output to deepgram for faster synthesis (rough sketch
  below)
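
A rough sketch of what that last item could look like, assuming Deepgram's HTTP text-to-speech endpoint; the endpoint URL, model name, env var, and the shape of the token stream are all assumptions here, not the current implementation:

```ts
// Sketch: flush the streaming AI output to Deepgram at sentence boundaries so
// synthesis (and playback) can start before the model finishes responding.
async function synthesize(text: string): Promise<ArrayBuffer> {
  // Assumed endpoint/model -- verify against the Deepgram TTS docs.
  const res = await fetch('https://api.deepgram.com/v1/speak?model=aura-asteria-en', {
    method: 'POST',
    headers: {
      Authorization: `Token ${process.env.DEEPGRAM_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ text }),
  });
  return res.arrayBuffer();
}

export async function streamSpeech(
  tokens: AsyncIterable<string>,      // the AI's token stream (assumed shape)
  play: (audio: ArrayBuffer) => void, // hands each clip to the playback queue
): Promise<void> {
  let buffer = '';
  for await (const token of tokens) {
    buffer += token;
    const boundary = buffer.search(/[.!?]\s/); // end of a sentence
    if (boundary !== -1) {
      const sentence = buffer.slice(0, boundary + 1);
      buffer = buffer.slice(boundary + 2);
      play(await synthesize(sentence));
    }
  }
  if (buffer.trim()) play(await synthesize(buffer)); // trailing partial sentence
}
```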