
StressLess Android MVP - Cursor AI Rules

1. Project Setup & Privacy

1.1 Privacy Mode Configuration

# MANDATORY: Enable Privacy Mode before opening StressLess project
# Settings → Privacy Mode → Enable
# Verify "Don't send code to AI" is checked
# Never expose encryption keys, model weights, or user voice data in prompts

1.2 Secure Development Practices

// ❌ NEVER: Expose sensitive data in prompts
// "Fix this encryption using key: StressLessSecureKey2025"
// "Debug this model with weights: [actual model data]"

// ✅ ALWAYS: Use placeholders and abstractions
// "Optimize encryption implementation with proper key management"
// "Improve model loading with error handling and validation"

1.3 GDPR Compliance Enforcement

// Every AI-generated data handling method must include:
// - Privacy-by-design patterns
// - Data minimization principles
// - User consent validation
// - Export/deletion capabilities
// - Local-only processing confirmation
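A minimal sketch of the consent-validation point above. The `ConsentRecord` shape and `ConsentValidator` helper are illustrative assumptions, not existing StressLess APIs:

```kotlin
// Hypothetical sketch - names are illustrative, not existing StressLess APIs
data class ConsentRecord(
    val analyticsConsent: Boolean,
    val voiceProcessingConsent: Boolean,
    val consentTimestampMillis: Long
)

class ConsentValidator {
    // Data minimization: only proceed when this specific purpose was consented to
    fun requireVoiceProcessingConsent(consent: ConsentRecord?) {
        require(consent != null && consent.voiceProcessingConsent) {
            "Voice processing requires explicit user consent"
        }
    }
}

fun main() {
    val validator = ConsentValidator()
    val consent = ConsentRecord(
        analyticsConsent = false,
        voiceProcessingConsent = true,
        consentTimestampMillis = System.currentTimeMillis()
    )
    validator.requireVoiceProcessingConsent(consent) // passes
    println("consent validated")
}
```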

2. TDD & Gherkin Integration

2.1 Test-First Development

# For every feature request, follow this sequence:
1. "Create Gherkin scenarios for [feature] in features/[feature_name].feature"
2. "Generate step definitions for the scenarios"
3. "Implement failing tests based on step definitions"
4. "Write minimal code to make tests pass"
5. "Refactor while maintaining test coverage"

2.2 Gherkin Scenario Creation

# When creating new features, always start with Gherkin:
# "Create comprehensive Gherkin scenarios for voice recording with the following requirements:
# - 30 second maximum recording time
# - Real-time audio quality feedback
# - Cancel/stop functionality
# - Error handling for permission denied
# - Performance validation (<3s processing)"
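A sketch of what a resulting feature file might look like. The scenario wording is illustrative, not the project's actual voice_check.feature:

```gherkin
Feature: Voice recording for stress analysis

  Scenario: Recording stops automatically at the 30 second limit
    Given I am on the home screen
    And microphone permission is granted
    When I start a voice recording
    And 30 seconds elapse
    Then the recording stops automatically
    And stress analysis begins within 3 seconds

  Scenario: Recording is blocked when permission is denied
    Given microphone permission is denied
    When I start a voice recording
    Then I see an error explaining the permission requirement
    And no audio is recorded
```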

2.3 Step Definition Standards

// All generated step definitions must follow this pattern:
@Given("I am on the home screen")
fun iAmOnTheHomeScreen() {
    composeTestRule
        .onNodeWithTag("home_screen")
        .assertIsDisplayed()
}
// Include proper test tags and assertions
// Use descriptive method names
// Add error handling for flaky tests

3. Project Context & Architecture

3.1 Essential Files Context

# ALWAYS open these files for better AI context:
- StressLessApp.kt (Application entry point)
- AppModule.kt (Hilt dependency injection)
- ECAPAStressEngine.kt (ML inference engine)
- StressAnalysisManager.kt (Business orchestrator)
- AudioPipeline.kt (Audio preprocessing)
- LocalStressRepository.kt (Data persistence)
- VoiceCheckViewModel.kt (UI state management)
- Navigation.kt (App routing)

3.2 Architecture Layer Specification

// Always specify the layer when requesting changes:
// "Update the ML layer to add NPU fallback in ECAPAStressEngine"
// "Modify the UI layer to improve voice recording feedback in VoiceCheckScreen"
// "Enhance the data layer to support stress trend calculations in LocalStressRepository"
// "Optimize the domain layer business logic in StressAnalysisManager"

3.3 Modular Code Organization

// Maintain clean architecture separation:
// - UI layer: Compose screens, ViewModels, navigation
// - Domain layer: Use cases, business logic, models
// - Data layer: Repositories, DAOs, entities
// - ML layer: Model loading, inference, audio processing
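One plausible package layout matching these layers. The directory names are assumptions inferred from the files listed in section 3.1, not a confirmed project structure:

```
com.stressless/
├── ui/        # VoiceCheckScreen.kt, VoiceCheckViewModel.kt, Navigation.kt
├── domain/    # StressAnalysisManager.kt, use cases, models
├── data/      # LocalStressRepository.kt, DAOs, entities
├── ml/        # ECAPAStressEngine.kt, AudioPipeline.kt
└── di/        # AppModule.kt
```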

4. Code Quality & Standards

4.1 Kotlin Coding Standards

// Enforce consistent code style in all AI generation:
class StressAnalysisManager @Inject constructor(
    private val ecapaEngine: ECAPAStressEngine,
    private val audioPipeline: AudioPipeline,
    private val repository: LocalStressRepository
) {
    suspend fun analyzeStress(audioData: List<ByteArray>): StressAnalysisResult =
        withContext(Dispatchers.Default) {
            // Implementation with proper error handling
        }
}

4.2 Error Handling Patterns

// MANDATORY: Include comprehensive error handling in all generated code
try {
    val result = performRiskyOperation()
    return result
} catch (e: SpecificException) {
    Timber.e(e, "Specific error occurred")
    throw DomainSpecificException("User-friendly message", e)
} catch (e: Exception) {
    Timber.e(e, "Unexpected error")
    throw UnexpectedAnalysisException("Operation failed unexpectedly", e)
}

4.3 Coroutines Best Practices

// All async operations must use proper coroutine patterns:
suspend fun processAudioData(data: ByteArray): FloatArray =
    withContext(Dispatchers.Default) {
        // CPU-intensive work on Default dispatcher
    }

suspend fun saveToDatabase(result: StressAnalysisResult) =
    withContext(Dispatchers.IO) {
        // IO operations on IO dispatcher
    }
// Always specify the appropriate dispatcher
// Use structured concurrency
// Handle cancellation properly

4.4 Memory Management

// Include memory-conscious patterns:
class AudioProcessor {
    private val bufferPool = AudioBufferPool()

    suspend fun process(data: ByteArray): FloatArray {
        val buffer = bufferPool.acquire()
        try {
            // Process data
            return processedResult
        } finally {
            bufferPool.release(buffer)
        }
    }
}

4.5 Logging Standards

// MANDATORY: Use structured logging; DO NOT use println/System.out
// ✅ ALWAYS USE: Timber for all logs in app and tests
// Examples:
Timber.d("Initializing ECAPA engine…")
Timber.i("Models downloaded: %s", result)
Timber.w("NNAPI delegate not available: %s", e.message)
Timber.e(e, "Failed to load model: %s", modelName)

// ❌ NEVER USE:
println("debug…")
System.out.println("…")
System.err.println("…")

// Privacy/Security:
// - Never log sensitive data (audio bytes, embeddings, tokens)
// - Prefer formatted strings with placeholders over string concatenation
// - Use appropriate log level (d/i/w/e) and short, actionable messages

5. ML/AI Specific Rules

5.1 Model Loading Standards

// Every model loading method must include:
// - Checksum verification
// - NPU delegate fallback logic
// - Memory management
// - Performance timing
// - Error recovery mechanisms
suspend fun loadModel(modelPath: String): Interpreter =
    withContext(Dispatchers.IO) {
        try {
            validateModelChecksum(modelPath)
            val delegate = npuDelegate.getOptimalDelegate()
            val options = Interpreter.Options().addDelegate(delegate)
            return@withContext Interpreter(loadModelBuffer(modelPath), options)
        } catch (e: Exception) {
            Timber.e(e, "Model loading failed")
            throw ModelLoadingException("Failed to load $modelPath", e)
        }
    }

5.2 Inference Performance Rules

// All inference code must meet performance targets:
// - Total analysis time: <3 seconds
// - Memory usage: <200MB peak
// - Battery impact: <2% per analysis
// - Include performance monitoring
suspend fun performInference(input: FloatArray): StressClassification {
    val session = performanceMonitor.startTiming("stress_classification")
    try {
        val result = classifier.classify(input)
        session.end()
        if (session.durationMs > 3000) {
            Timber.w("Inference exceeded 3s target: %dms", session.durationMs)
        }
        return result
    } catch (e: Exception) {
        session.end()
        throw e
    }
}

5.3 Audio Processing Standards

// Audio processing must include:
// - Quality validation
// - Noise filtering
// - Proper sampling rate handling
// - Buffer management
// - Real-time feedback
fun processAudioChunk(chunk: ByteArray): AudioProcessingResult {
    val quality = validateAudioQuality(chunk)
    if (quality == AudioQuality.POOR) {
        return AudioProcessingResult.QualityTooLow(
            "Try recording in a quieter environment"
        )
    }
    val processed = applyNoiseReduction(chunk)
    val features = extractMFCCFeatures(processed)
    return AudioProcessingResult.Success(features, quality)
}

5.4 Real Audio Sample Requirements

// MANDATORY: Use ONLY real audio samples from CREMA-D dataset
// ❌ NEVER GENERATE: Synthetic audio, placeholder samples, or artificial data
// ✅ ALWAYS USE: Real WAV samples from app/src/main/res/raw/
private fun loadWavSample(filename: String): ByteArray {
    // Load WAV file from raw resources - only real samples
    val resourceId = getResourceId(filename)
    val inputStream = context.resources.openRawResource(resourceId)
    return inputStream.readBytes()
}

// Required WAV samples (all must be present):
// - sample_angry.wav (medium stress)
// - sample_angry_high.wav (high stress)
// - sample_happy.wav (low stress)
// - sample_neutral.wav (neutral)
// - sample_sad.wav (medium stress)

// Test validation must fail if any real samples are missing
assertTrue("All WAV samples must be available", allSamplesPresent)

5.5 Android Compatibility Standards

// MANDATORY: All code must be Android-compatible
// ❌ NEVER USE: Shell scripts, Python scripts, or non-Android tools
// ✅ ALWAYS USE: Android testing frameworks and tools

// Use Android test runners:
@RunWith(AndroidJUnit4::class)
class AndroidIntegrationTest {
    @Test
    fun testWithRealAndroidContext() {
        val context = ApplicationProvider.getApplicationContext<Context>()
        // Test with real Android context
    }
}

// Use Android-specific testing:
// - androidx.test.core.app.ApplicationProvider
// - androidx.test.ext.junit.runners.AndroidJUnit4
// - androidx.test.platform.app.InstrumentationRegistry
// - Real Android Context, not mocked

5.6 Placeholder Models Prohibition

// MANDATORY: Do not generate or use placeholder ML model files
// ❌ NEVER: Create fake/filled ByteArray ".litert/.tflite" files for testing
// ✅ ALWAYS: Download and validate real LiteRT/TFLite models (identifier at bytes 4..7 = "TFL3")
// ✅ If a local model exists but fails validation, delete and re-download
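The "TFL3" identifier check can be sketched as a small pure-Kotlin helper. This is a minimal sketch; the function name is illustrative, and a real implementation would feed it the first bytes of the model file before deciding whether to delete and re-download:

```kotlin
// Sketch: validate that a byte buffer looks like a real TFLite/LiteRT flatbuffer.
// TFLite flatbuffers carry the file identifier "TFL3" at bytes 4..7.
fun hasTfliteIdentifier(bytes: ByteArray): Boolean {
    if (bytes.size < 8) return false
    return bytes.copyOfRange(4, 8).toString(Charsets.US_ASCII) == "TFL3"
}

fun main() {
    val fake = ByteArray(16) // zero-filled placeholder bytes - must be rejected
    val plausible = ByteArray(16).also {
        "TFL3".toByteArray(Charsets.US_ASCII).copyInto(it, 4)
    }
    println(hasTfliteIdentifier(fake))      // false
    println(hasTfliteIdentifier(plausible)) // true
}
```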

6. UI/UX Development Rules

6.1 Jetpack Compose Patterns

// Enforce consistent Compose architecture:
@Composable
fun VoiceCheckScreen(
    onNavigateBack: () -> Unit,
    onNavigateToResult: (String) -> Unit,
    viewModel: VoiceCheckViewModel = hiltViewModel()
) {
    val uiState by viewModel.uiState.collectAsState()

    LaunchedEffect(uiState.analysisComplete) {
        if (uiState.analysisComplete && uiState.resultId != null) {
            onNavigateToResult(uiState.resultId)
        }
    }
    // UI implementation with proper state management
}

6.2 Accessibility Requirements

// ALL UI components must include accessibility support:
Button(
    onClick = onStartRecording,
    modifier = Modifier
        .testTag("voice_check_button")
        .semantics {
            contentDescription = "Start voice stress analysis recording"
            role = Role.Button
        }
) {
    Icon(
        imageVector = Icons.Default.Mic,
        contentDescription = null, // Already described by button
        modifier = Modifier.size(24.dp)
    )
    Spacer(Modifier.width(8.dp))
    Text("Start Recording")
}

6.3 Material 3 Compliance

// Use Material 3 design system consistently:
@Composable
fun StressLevelCard(stressLevel: Int, confidence: Float) {
    Card(
        colors = CardDefaults.cardColors(
            containerColor = MaterialTheme.colorScheme.surfaceVariant
        ),
        elevation = CardDefaults.cardElevation(defaultElevation = 6.dp)
    ) {
        Column(
            modifier = Modifier.padding(16.dp),
            horizontalAlignment = Alignment.CenterHorizontally
        ) {
            Text(
                text = "$stressLevel/10",
                style = MaterialTheme.typography.displayMedium,
                color = getStressLevelColor(stressLevel)
            )
        }
    }
}

6.4 Real-time UI Updates

// Implement reactive UI patterns:
@Composable
fun AudioLevelVisualizer(audioLevel: Float) {
    val animatedLevel by animateFloatAsState(
        targetValue = audioLevel,
        animationSpec = tween(durationMillis = 100)
    )
    Canvas(modifier = Modifier.fillMaxWidth().height(120.dp)) {
        // Draw waveform based on animatedLevel
    }
}

7. Testing Strategy & Rules

7.1 Test Coverage Requirements

// Generate tests for all new functionality:
// - Unit tests for business logic (>90% coverage)
// - Integration tests for ML pipeline
// - UI tests for critical user flows
// - Performance tests for inference timing
// - Gherkin scenario validation
@Test
fun `analyzeStress should return valid result within performance target`() = runTest {
    // Given - real WAV samples only (see sections 5.4 and 13.4)
    val audioData = loadTestAudioFromAssets()

    // When
    val startTime = System.currentTimeMillis()
    val result = stressAnalysisManager.performStressAnalysis(audioData)
    val endTime = System.currentTimeMillis()

    // Then
    assertTrue("Analysis should complete within 3s", (endTime - startTime) < 3000)
    assertTrue("Stress level should be 1-10", result.stressLevel in 1..10)
    assertTrue("Confidence should be 0-1", result.confidence in 0f..1f)
}

7.2 Mock Usage Standards

// Use consistent mocking patterns:
@Test
fun `voice check should handle microphone permission denial`() = runTest {
    // Given
    coEvery { audioRecorder.hasPermission() } returns false

    // When
    viewModel.startRecording()

    // Then
    val uiState = viewModel.uiState.value
    assertTrue("Should show permission error",
        uiState.errorMessage?.contains("permission") == true)
    assertFalse("Should not be recording", uiState.isRecording)
    coVerify { audioRecorder.hasPermission() }
    coVerify(exactly = 0) { audioRecorder.startRecording(any()) }
}

7.3 Integration Test Patterns

// Test complete user journeys:
@Test
fun `complete stress analysis flow should work end-to-end`() = runTest {
    // Given
    val testAudioData = loadTestAudioFromAssets()

    // When - Simulate complete user flow
    val analysisResult = stressAnalysisManager.performStressAnalysis(testAudioData)

    // Then - Verify all components worked together
    assertNotNull("Result should be generated", analysisResult)
    assertTrue("Result should be saved",
        repository.getAssessmentById(analysisResult.id) != null)
    assertTrue("Recommendations should be generated",
        analysisResult.recommendations.isNotEmpty())
}

7.4 Real Audio Sample Testing Requirements

// MANDATORY: All audio testing must use real WAV samples
// ❌ NEVER USE: generateTestAudioData(), createPlaceholderAudioSample(), or synthetic audio
// ✅ ALWAYS USE: Real WAV samples from CREMA-D dataset
@Test
fun `test stress detection with real audio samples`() = runBlocking {
    // Load real WAV samples from CREMA-D dataset
    val audioSamples = createRealisticAudioSamples()

    for ((sampleName, audioData) in audioSamples) {
        val result = ecapaStressEngine.analyzeStress(audioData)

        // Validate expected results based on real sample characteristics
        when (sampleName) {
            "Angry Speech (High Stress)" -> {
                assertTrue("Angry speech should have high stress level",
                    result.stressLevel >= 7)
            }
            "Happy Speech (Low Stress)" -> {
                assertTrue("Happy speech should have low stress level",
                    result.stressLevel <= 4)
            }
            "Neutral Speech" -> {
                assertTrue("Neutral speech should have moderate stress level",
                    result.stressLevel in 3..6)
            }
        }
    }
}

private fun createRealisticAudioSamples(): Map<String, ByteArray> {
    val samples = mutableMapOf<String, ByteArray>()
    // Use ONLY real WAV samples from CREMA-D dataset
    samples["Angry Speech (High Stress)"] = loadWavSample("sample_angry_high.wav")
    samples["Angry Speech (Medium Stress)"] = loadWavSample("sample_angry.wav")
    samples["Happy Speech (Low Stress)"] = loadWavSample("sample_happy.wav")
    samples["Neutral Speech"] = loadWavSample("sample_neutral.wav")
    samples["Sad Speech (Medium Stress)"] = loadWavSample("sample_sad.wav")
    return samples
}

7.5 Android Test Environment Requirements

// MANDATORY: Use Android test runners and real Android context
// ❌ NEVER USE: Shell scripts, Python scripts, or non-Android tools
// ✅ ALWAYS USE: Android instrumentation tests
@RunWith(AndroidJUnit4::class)
class StressDetectionAndroidIntegrationTest {

    private lateinit var context: Context

    @Before
    fun setUp() {
        // Use real Android context for testing
        context = ApplicationProvider.getApplicationContext()
    }

    @Test
    fun verifyTestEnvironment() {
        // Verify all required WAV samples are present
        val wavSamples = listOf(
            "sample_angry.wav", "sample_happy.wav", "sample_neutral.wav",
            "sample_sad.wav", "sample_angry_high.wav"
        )
        for (sample in wavSamples) {
            try {
                val resourceId = getResourceId(sample)
                val inputStream = context.resources.openRawResource(resourceId)
                inputStream.close()
                Timber.d("WAV sample present: %s", sample)
            } catch (e: Exception) {
                fail("Required WAV sample not found: $sample")
            }
        }
    }
}

8. Performance Optimization Rules

8.1 Performance Targets

// All AI-generated code must meet these targets:
// - Voice analysis: <3 seconds end-to-end
// - Memory usage: <200MB peak
// - Battery impact: <2% per analysis
// - App startup: <2 seconds to home screen
// - Model loading: <1 second after app start

// Include performance monitoring in generated code:
class PerformanceAwareComponent @Inject constructor(
    private val performanceMonitor: PerformanceMonitor
) {
    suspend fun performOperation(): Result {
        val session = performanceMonitor.startTiming("operation_name")
        try {
            val result = doWork()
            session.endWithResult(result)
            return result
        } catch (e: Exception) {
            session.end()
            throw e
        }
    }
}

8.2 Memory Management

// Implement memory-efficient patterns:
class AudioDataProcessor @Inject constructor(
    private val bufferPool: AudioBufferPool
) {
    suspend fun processLargeAudioFile(audioData: ByteArray): ProcessingResult {
        val chunks = audioData.asIterable().chunked(CHUNK_SIZE)
        val results = mutableListOf<ChunkResult>()
        chunks.forEach { chunk ->
            val buffer = bufferPool.acquire()
            try {
                val processed = processChunk(chunk, buffer)
                results.add(processed)
            } finally {
                bufferPool.release(buffer)
            }
        }
        return combineResults(results)
    }
}

8.3 Battery Optimization

// Include battery-aware processing:
suspend fun performAnalysisWithBatteryOptimization(
    audioData: ByteArray
): StressAnalysisResult {
    val batteryRecommendation = batteryOptimizer.shouldPerformAnalysis()
    return when (batteryRecommendation) {
        AnalysisRecommendation.USE_NPU_FULL -> performFullAnalysis(audioData)
        AnalysisRecommendation.USE_BALANCED_MODE -> performBalancedAnalysis(audioData)
        AnalysisRecommendation.USE_CPU_ONLY -> performCPUOnlyAnalysis(audioData)
        AnalysisRecommendation.SKIP_ANALYSIS ->
            throw LowBatteryException("Analysis skipped to preserve battery")
    }
}

9. Security & Privacy Rules

9.1 Data Protection Standards

// All data handling must include encryption:
class SecureDataHandler @Inject constructor(
    private val encryptionManager: EncryptionManager
) {
    suspend fun saveUserData(data: SensitiveData): String {
        val encryptedData = encryptionManager.encrypt(data.toJson())
        val dataId = UUID.randomUUID().toString()

        // Save only encrypted data
        repository.saveEncrypted(dataId, encryptedData)

        // Never log sensitive data
        Timber.d("Saved user data with ID: %s", dataId)
        return dataId
    }
}

9.2 Local Processing Validation

// Verify all processing remains local:
class StressAnalysisManager @Inject constructor() {
    suspend fun analyzeStress(audioData: ByteArray): StressAnalysisResult {
        // VERIFY: No network calls in this method or its dependencies
        require(!hasNetworkDependencies()) { "Analysis must remain completely offline" }

        val result = performLocalAnalysis(audioData)

        // VERIFY: No sensitive data in logs
        Timber.i("Analysis completed with level %d/10", result.stressLevel)
        // Never log: audioData, user voice patterns, raw embeddings
        return result
    }
}

9.3 GDPR Compliance Implementation

// Include data subject rights:
interface GDPRCompliantRepository {
    suspend fun exportUserData(userId: String): String      // Right to access
    suspend fun deleteUserData(userId: String): Boolean     // Right to erasure
    suspend fun updateUserConsent(userId: String, consent: ConsentRecord): Boolean // Consent management
    suspend fun getDataRetentionInfo(): DataRetentionInfo   // Transparency
}

10. Prompt Engineering Best Practices

10.1 Specific Technical Instructions

# ✅ EXCELLENT: Detailed, context-aware prompts
"In ECAPAStressEngine.kt, add NPU delegate fallback logic that:
1. Detects available NPU capabilities using device hardware info
2. Falls back to GPU delegate if NPU initialization fails
3. Falls back to CPU delegate if GPU fails
4. Logs the selected delegate and performance characteristics
5. Maintains <3s inference time target across all delegates
6. Includes proper error handling and resource cleanup"

# ❌ POOR: Vague, non-specific prompts
"Make the AI better"
"Fix the performance issues"
"Add more features"

10.2 Multi-Component Integration

# Example of comprehensive multi-file changes:
"Update the voice recording flow to provide real-time feedback:
1. In AudioRecorder.kt: Add StateFlow<Float> for audio level monitoring
2. In AudioPipeline.kt: Add real-time audio quality assessment
3. In VoiceCheckViewModel.kt: Combine audio level and quality into UI state
4. In VoiceCheckScreen.kt: Add waveform visualization and quality indicators
5. Add corresponding unit tests for each component
6. Update the voice_check.feature Gherkin scenarios to include quality feedback testing"

10.3 Testing Integration Prompts

# Always include testing in feature requests:
"Implement stress trend calculation with full TDD approach:
1. First, create Gherkin scenarios in features/stress_trends.feature
2. Generate step definitions for the scenarios
3. Write failing unit tests for trend calculation logic
4. Implement the business logic to make tests pass
5. Add integration tests for repository trend queries
6. Create UI tests for trend display components
7. Ensure all tests pass and maintain >90% coverage"

11. Advanced Cursor Features

11.1 Code Context Management

# Use Cursor's context features effectively:
# - Pin important files (StressAnalysisManager, ECAPAStressEngine)
# - Use @workspace to reference entire project context
# - Use @folder to reference specific module context
# - Reference specific files with @filename when needed

# Example context-aware prompt:
"Using @ECAPAStressEngine and @AudioPipeline, optimize the stress analysis pipeline
to reduce memory usage while maintaining accuracy. The solution should integrate
with existing @StressAnalysisManager orchestration."

11.2 Iterative Development

# Use Cursor's iteration features:
# - Make small, incremental changes
# - Test after each change
# - Use "Continue" to build on previous responses
# - Use "Edit" to refine specific code sections

# Example iterative approach:
1. "Add basic audio recording functionality"
2. "Now add real-time audio level monitoring"
3. "Add audio quality validation to the previous implementation"
4. "Finally, add error handling for all edge cases"

11.3 Documentation Generation

# Generate comprehensive documentation:
"Generate complete KDoc documentation for ECAPAStressEngine including:
- Class-level documentation explaining the ECAPA-TDNN implementation
- Method-level docs with @param, @return, @throws annotations
- Usage examples for common scenarios
- Performance characteristics and optimization notes
- Thread safety guarantees
- Memory management considerations"

12. Quality Assurance Rules

12.1 Code Review Checklist

# Review ALL AI-generated code for:
□ Privacy compliance (no sensitive data exposure)
□ Performance impact (meets <3s, <200MB targets)
□ Error handling (comprehensive try-catch blocks)
□ Testing coverage (unit + integration + UI tests)
□ Documentation (KDoc for public APIs)
□ Architecture compliance (proper layer separation)
□ Memory management (proper resource cleanup)
□ Threading (correct coroutine usage)
□ Accessibility (semantic descriptions)
□ Security (local-only processing)

12.2 Pre-commit Validation

# Before committing AI-generated code:
1. Run all tests: ./gradlew test connectedAndroidTest
2. Run lint checks: ./gradlew lintDebug
3. Run performance benchmarks
4. Verify Gherkin scenarios pass
5. Check memory usage in profiler
6. Validate accessibility with screen reader
7. Review security scan results
8. Confirm APK size impact (<50MB total)

12.3 Integration Testing

# After AI code integration:
□ Full app builds successfully
□ All unit tests pass (>90% coverage)
□ Integration tests pass
□ UI tests pass on multiple screen sizes
□ Performance benchmarks maintained
□ Memory usage within limits
□ Battery impact acceptable
□ Accessibility features work
□ Offline functionality preserved

13. Forbidden Patterns & Anti-Patterns

13.1 Security Violations

// ❌ NEVER GENERATE:
private val encryptionKey = "hardcoded_key_123"
private val apiKey = "sk-1234567890abcdef"
val userData = preferences.getString("voice_data", "")
database.execSQL("INSERT INTO users VALUES ('${userInput}')")

// ✅ ALWAYS GENERATE:
private val encryptionKey = keyManager.getOrCreateKey()
private val apiConfiguration = configManager.getSecureConfig()
val userData = secureStorage.getEncrypted("user_data_key")
userDao.insertUser(User.fromValidatedInput(userInput))

13.2 Performance Violations

// ❌ NEVER GENERATE:
fun analyzeStress(): Int = runBlocking {
    interpreter.run(input) // Blocking main thread
}

fun processAudio() {
    while (true) { // Infinite loops
        heavyComputation()
    }
}

// ✅ ALWAYS GENERATE:
suspend fun analyzeStress(): Int = withContext(Dispatchers.Default) {
    interpreter.run(input)
}

suspend fun processAudio() = withTimeout(30_000) {
    processWithCancellationSupport()
}

13.3 Architecture Violations

// ❌ NEVER GENERATE:
class ViewModel {
    fun saveData() {
        database.insert(data) // Direct DB access from ViewModel
    }
}

@Composable
fun Screen() {
    val result = MLModel.analyze() // Direct ML inference from UI
}

// ✅ ALWAYS GENERATE:
class ViewModel @Inject constructor(
    private val repository: Repository
) {
    suspend fun saveData() {
        repository.save(data)
    }
}

@Composable
fun Screen(viewModel: ScreenViewModel = hiltViewModel()) {
    val result by viewModel.analysisResult.collectAsState()
}

13.4 Audio and Testing Violations

// ❌ NEVER GENERATE: Synthetic or placeholder audio
fun generateTestAudioData(): ByteArray { /* synthetic generation */ }
fun createPlaceholderAudioSample(): ByteArray { /* placeholder creation */ }
fun generateStressedSpeechSample(): ByteArray { /* artificial audio */ }
fun generateCalmSpeechSample(): ByteArray { /* artificial audio */ }
fun generateNeutralSpeechSample(): ByteArray { /* artificial audio */ }

// ❌ NEVER USE: Shell scripts or non-Android tools
//   #!/bin/bash
//   python test_models.py
//   ./test_models.sh

// ✅ ALWAYS USE: Real audio samples and Android testing
private fun loadWavSample(filename: String): ByteArray {
    val resourceId = getResourceId(filename)
    val inputStream = context.resources.openRawResource(resourceId)
    return inputStream.readBytes()
}

@RunWith(AndroidJUnit4::class)
class AndroidIntegrationTest {
    @Test
    fun testWithRealAudioSamples() {
        // Use real WAV samples from CREMA-D dataset
    }
}

14. Success Metrics & Validation

14.1 Code Quality Metrics

# AI-generated code must achieve:
- Compilation success rate: 100%
- Test coverage: >90%
- Performance regression: 0%
- Security scan: 0 critical, <5 medium issues
- Accessibility score: 100% (no violations)
- Memory leak detection: 0 leaks
- APK size increase: <5MB per major feature

14.2 Feature Completion Criteria

# Each AI-generated feature is complete when:
□ All Gherkin scenarios pass
□ Unit tests pass with >90% coverage
□ Integration tests validate end-to-end flow
□ UI tests cover critical user journeys
□ Performance benchmarks maintained
□ Accessibility requirements met
□ Security review passed
□ Code review approved
□ Documentation updated
□ Release notes prepared

14.3 Continuous Improvement

# Track and improve AI assistance:
- Monitor code quality metrics before/after AI changes
- Measure development velocity improvements
- Track test coverage maintenance
- Monitor performance regression prevention
- Assess security issue reduction
- Evaluate accessibility compliance improvement

15. Project-Specific Context

15.1 StressLess Domain Knowledge

# Ensure AI understands the domain:
"StressLess is an offline-first voice stress analysis app using ECAPA-TDNN neural
networks for speaker verification, adapted to stress detection. Key constraints:
- All processing must remain on-device
- Voice data never transmitted over network
- <3 second analysis time requirement
- NPU acceleration when available
- Material 3 design system
- GDPR compliance mandatory
- Supports Android 9+ (API 28+)"

15.2 Technology Stack Context

# Reference complete tech stack:
"Current StressLess technology stack:
- Kotlin with Coroutines for async processing
- Jetpack Compose for UI with Material 3
- Hilt for dependency injection
- Room + SQLCipher for encrypted local storage
- LiteRT (TensorFlow Lite) for ML inference
- Qualcomm QNN delegates for NPU acceleration
- TarsosDSP for audio processing
- Cucumber for Gherkin scenario testing
- WorkManager for background processing"

15.3 Business Requirements Context

# Keep business context in mind:
"StressLess MVP must deliver:
- Real-time voice-based stress detection (1-10 scale)
- Immediate wellness recommendations
- Historical trend tracking
- Complete offline functionality
- Enterprise-grade privacy protection
- Sub-3-second user experience
- Battery-optimized processing
- Accessibility compliance
- Multi-language support ready"
21 September 2025