Files
app/components/ChatInterface.tsx
Albert c2f2d10ee1 feat: Step 7 & 9 - AI Chat + Voice client integration
Implement AI-powered chat interface with voice input capabilities.

Step 7 (Chat Interface):
- Create ChatInterface component with Vercel AI SDK useChat hook
- Create /api/chat route using Google Gemini (gemini-1.5-flash)
- Implement thoughtful interviewer system prompt
- Add real-time message streaming
- Auto-scroll to latest messages
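
The Step 7 route boils down to reshaping useChat messages for Gemini. A minimal sketch of that mapping — the helper names and prompt text are illustrative, not the commit's; the `contents`/`systemInstruction` shape and the `model` role for assistant turns follow Gemini's generateContent API:

```typescript
// Illustrative message mapping for a /api/chat route backed by Gemini.
// SYSTEM_PROMPT stands in for the commit's "thoughtful interviewer" prompt.

type ChatMessage = { role: 'user' | 'assistant'; content: string };

const SYSTEM_PROMPT =
  'You are a thoughtful interviewer. Ask one open-ended question at a time.';

// Convert useChat-style messages into Gemini "contents":
// assistant turns are labeled 'model' in Gemini's schema.
function toGeminiContents(messages: ChatMessage[]) {
  return messages.map((m) => ({
    role: m.role === 'assistant' ? 'model' : 'user',
    parts: [{ text: m.content }],
  }));
}

// Assemble a generateContent-style request body.
function buildGeminiRequest(messages: ChatMessage[]) {
  return {
    systemInstruction: { parts: [{ text: SYSTEM_PROMPT }] },
    contents: toGeminiContents(messages),
  };
}
```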

Step 9 (Voice Client):
- Create MicrophoneRecorder component
- Integrate real-time voice transcription via Deepgram
- Direct WebSocket connection using temporary tokens
- Real-time transcript display in chat input
- Auto-submit on speech_final event
- Add @tabler/icons-react for microphone icons
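
The Step 9 connection flow can be sketched as follows — not the commit's code; the `/v1/listen` URL and the `['token', …]` subprotocol trick follow Deepgram's live API, while the function names and parameters are assumptions:

```typescript
// Build the live-transcription URL (kept pure so it is easy to test).
function buildDeepgramUrl(params: Record<string, string>): string {
  const qs = new URLSearchParams(params).toString();
  return `wss://api.deepgram.com/v1/listen?${qs}`;
}

// Browser-only wiring: capture mic audio and stream 250ms chunks.
// Illustrative; the real MicrophoneRecorder's internals may differ.
async function startRecording(
  tempToken: string,
  onMessage: (ev: MessageEvent) => void,
) {
  const url = buildDeepgramUrl({ model: 'nova-2', interim_results: 'true' });
  // Browsers cannot set an Authorization header on a WebSocket, so the
  // temporary token rides in the subprotocol list instead.
  const ws = new WebSocket(url, ['token', tempToken]);
  ws.onmessage = onMessage;

  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  recorder.ondataavailable = (e) => {
    if (e.data.size > 0 && ws.readyState === WebSocket.OPEN) ws.send(e.data);
  };
  recorder.start(250); // emit an audio chunk every 250ms
  return { ws, recorder };
}
```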

Architecture:
- Client requests temporary Deepgram token from /api/voice-token
- MediaRecorder captures audio in 250ms chunks
- WebSocket sends audio directly to Deepgram
- Transcripts update chat input in real-time
- Final transcript auto-submits to AI chat
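
The "update in real time / auto-submit on speech_final" steps hinge on parsing Deepgram's Results frames. A minimal parser sketch — the field names (`channel.alternatives[0].transcript`, `is_final`, `speech_final`) follow Deepgram's live API; the return shape is illustrative:

```typescript
interface LiveTranscript {
  text: string;
  isFinal: boolean;     // this chunk's words will not be revised further
  speechFinal: boolean; // the speaker has paused — safe to auto-submit
}

// Parse one raw WebSocket message; returns null for metadata and
// other non-transcript frames.
function parseDeepgramMessage(raw: string): LiveTranscript | null {
  const msg = JSON.parse(raw);
  const text = msg?.channel?.alternatives?.[0]?.transcript;
  if (typeof text !== 'string') return null;
  return {
    text,
    isFinal: Boolean(msg.is_final),
    speechFinal: Boolean(msg.speech_final),
  };
}
```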

Security:
- Deepgram API key never exposed to client
- Temporary tokens expire in 60 seconds
- Chat requires authentication via SurrealDB JWT
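
A handler with this security shape might look like the sketch below — the project-keys endpoint and `time_to_live_in_seconds` field are Deepgram's; the handler, env-var names, and scope choice are assumptions, not the commit's code:

```typescript
// Body for a short-lived Deepgram key (pure helper, easy to test).
function buildTempKeyRequest(ttlSeconds: number) {
  return {
    comment: 'short-lived browser token',
    scopes: ['usage:write'], // enough for live transcription only
    time_to_live_in_seconds: ttlSeconds,
  };
}

// Server-side only: the long-lived key never leaves this function;
// the client receives only the 60-second key from the response.
async function issueVoiceToken(): Promise<{ key: string }> {
  const projectId = process.env.DEEPGRAM_PROJECT_ID;
  const res = await fetch(
    `https://api.deepgram.com/v1/projects/${projectId}/keys`,
    {
      method: 'POST',
      headers: {
        Authorization: `Token ${process.env.DEEPGRAM_API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(buildTempKeyRequest(60)),
    },
  );
  return res.json();
}
```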

Testing:
- Add magnitude test for voice recording flow
- Tests cover happy path with mocked WebSocket
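
Independent of the test harness's own API, the mocked-WebSocket happy path can be sketched generically — all names below are illustrative test scaffolding, not the commit's test code:

```typescript
// A fake WebSocket that records outbound frames and lets the test
// push simulated server messages.
class FakeWebSocket {
  readyState = 1; // OPEN
  sent: unknown[] = [];
  onmessage: ((ev: { data: string }) => void) | null = null;

  send(data: unknown) {
    this.sent.push(data);
  }

  // Test hook: simulate a Deepgram Results frame arriving.
  receive(payload: object) {
    this.onmessage?.({ data: JSON.stringify(payload) });
  }
}

// Happy path: an audio chunk goes out, a speech_final transcript
// comes back and triggers the auto-submit branch.
function runHappyPath(ws: FakeWebSocket): string {
  let submitted = '';
  ws.onmessage = (ev) => {
    const msg = JSON.parse(ev.data);
    const text = msg?.channel?.alternatives?.[0]?.transcript ?? '';
    if (msg.speech_final) submitted = text; // auto-submit trigger
  };
  ws.send(new ArrayBuffer(8)); // stands in for a 250ms audio chunk
  ws.receive({
    channel: { alternatives: [{ transcript: 'hello world' }] },
    speech_final: true,
  });
  return submitted;
}
```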

Known Issue:
- Page compilation needs debugging (useChat import path verified)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-09 00:27:40 +00:00

TypeScript

'use client';

// useChat is exported from '@ai-sdk/react' (or 'ai/react' in AI SDK v3),
// not from the root 'ai' package — importing it from 'ai' fails to compile.
import { useChat } from '@ai-sdk/react';
import { Container, ScrollArea, Paper, Group, TextInput, Button, Stack, Text, Box } from '@mantine/core';
import { useEffect, useRef } from 'react';

import { MicrophoneRecorder } from './MicrophoneRecorder';

export function ChatInterface() {
  const viewport = useRef<HTMLDivElement>(null);
  const formRef = useRef<HTMLFormElement>(null);

  const {
    messages,
    input,
    handleInputChange,
    handleSubmit,
    setInput,
    isLoading,
  } = useChat({
    api: '/api/chat',
  });

  // Auto-scroll to bottom when new messages arrive
  useEffect(() => {
    if (viewport.current) {
      viewport.current.scrollTo({
        top: viewport.current.scrollHeight,
        behavior: 'smooth',
      });
    }
  }, [messages]);

  return (
    <Container size="md" h="100vh" style={{ display: 'flex', flexDirection: 'column' }}>
      <Stack h="100%" gap="md" py="md">
        {/* Chat messages area */}
        <ScrollArea flex={1} type="auto" viewportRef={viewport}>
          <Stack gap="md">
            {messages.length === 0 && (
              <Text c="dimmed" ta="center" mt="xl">
                Start a conversation by typing or speaking...
              </Text>
            )}
            {messages.map((message) => (
              <Box
                key={message.id}
                style={{
                  alignSelf: message.role === 'user' ? 'flex-end' : 'flex-start',
                  maxWidth: '70%',
                }}
              >
                <Paper
                  p="sm"
                  radius="md"
                  bg={message.role === 'user' ? 'dark.6' : 'dark.7'}
                >
                  <Text size="sm">{message.content}</Text>
                </Paper>
              </Box>
            ))}
          </Stack>
        </ScrollArea>

        {/* Input area */}
        <form ref={formRef} onSubmit={handleSubmit}>
          <Paper withBorder p="sm" radius="xl">
            <Group gap="xs">
              <TextInput
                value={input}
                onChange={handleInputChange}
                placeholder="Speak or type your thoughts..."
                style={{ flex: 1 }}
                variant="unstyled"
                disabled={isLoading}
              />
              {/* Microphone Recorder */}
              <MicrophoneRecorder
                onTranscriptUpdate={(transcript) => {
                  // Mirror the interim transcript into the input field
                  setInput(transcript);
                }}
                onTranscriptFinalized={(transcript) => {
                  setInput(transcript);
                  // Defer submission so the setInput state update commits
                  // before handleSubmit reads the input value
                  setTimeout(() => {
                    formRef.current?.requestSubmit();
                  }, 100);
                }}
              />
              <Button type="submit" radius="xl" loading={isLoading}>
                Send
              </Button>
            </Group>
          </Paper>
        </form>
      </Stack>
    </Container>
  );
}