# Detection System Improvements Plan

**Project**: Landing System
**Date**: January 28, 2026
**Status**: Planning Phase
**Version**: 2.0 Enhancement

---

## Overview

This document outlines the implementation plan for filling all identified gaps in the detection system across four areas:

1. **Timing-Based Detection** (minimal - no sender integration)
2. **Behavioral Scoring** (full implementation)
3. **Environment Fingerprinting** (full implementation)
4. **Content Strategy** (full implementation)

---

## Implementation Phases

| Phase | Area | Priority | Status |
|-------|------|----------|--------|
| Phase 1 | Behavioral Scoring | HIGH | ✅ Complete |
| Phase 2 | Environment Fingerprinting | HIGH | ✅ Complete |
| Phase 3 | Content Strategy | MEDIUM | ✅ Complete |
| Phase 4 | Timing-Based Detection | LOW | ✅ Complete (minimal) |

---

# PHASE 1: BEHAVIORAL SCORING

## 1.1 Mouse Velocity Analysis

**Gap**: No velocity analysis - bots move at constant speed

**Current State**:
- Only counts mouse movements
- No speed/acceleration analysis

**Implementation Plan**:

### Step 1: Data Collection
- Store a timestamp with each mouse position
- Calculate velocity between consecutive points
- Store a velocity history array

### Step 2: Analysis Metrics
- **Average velocity**: Humans vary between 100-2000 px/sec
- **Velocity variance**: Humans have high variance, bots are constant
- **Acceleration patterns**: Humans accelerate/decelerate naturally
- **Pause detection**: Humans pause, bots don't

### Step 3: Scoring Rules

| Pattern | Human | Bot | Score Impact |
|---------|-------|-----|--------------|
| Constant velocity (variance < 10%) | ❌ | ✅ | +40 sandbox |
| No acceleration changes | ❌ | ✅ | +30 sandbox |
| Velocity > 5000 px/sec | ❌ | ✅ | +50 sandbox |
| No pauses in movement | ❌ | ✅ | +20 sandbox |
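### Code Sketch

A minimal sketch of the collection and scoring steps above, assuming it runs in the existing script in `templates/challenge/verify.html`. The `velocityData`/`scoreVelocity` names and the 300 ms pause threshold are illustrative assumptions; only the thresholds from the scoring table come from this plan.

```javascript
// Illustrative only: collect per-move velocities, then score variance and speed.
var velocityData = { samples: [], gaps: [], lastX: null, lastY: null, lastTime: null };

document.addEventListener('mousemove', function(e) {
    var now = Date.now();
    if (velocityData.lastTime !== null) {
        var dt = (now - velocityData.lastTime) / 1000; // seconds since previous event
        if (dt > 0) {
            var dx = e.clientX - velocityData.lastX;
            var dy = e.clientY - velocityData.lastY;
            velocityData.samples.push(Math.sqrt(dx * dx + dy * dy) / dt); // px/sec
            velocityData.gaps.push(dt);
        }
    }
    velocityData.lastX = e.clientX;
    velocityData.lastY = e.clientY;
    velocityData.lastTime = now;
});

function scoreVelocity(data) {
    if (data.samples.length < 10) return 0;

    var mean = data.samples.reduce(function(a, b) { return a + b; }, 0) / data.samples.length;
    var variance = data.samples.reduce(function(a, v) { return a + Math.pow(v - mean, 2); }, 0) / data.samples.length;
    var relSpread = mean > 0 ? Math.sqrt(variance) / mean : 0; // coefficient of variation

    var score = 0;
    if (relSpread < 0.10) score += 40;                                  // near-constant velocity
    if (Math.max.apply(null, data.samples) > 5000) score += 50;         // velocity > 5000 px/sec
    if (!data.gaps.some(function(g) { return g > 0.3; })) score += 20;  // no pauses (assumed 300 ms)
    return score;
}
```

The "no acceleration changes" rule can be scored the same way over the first differences of `samples`; it is omitted here to keep the sketch short.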
### Files to Modify
- `templates/challenge/verify.html`

---

## 1.2 Mouse Path Analysis

**Gap**: No path analysis - bots move in straight lines

**Current State**:
- No path tracking at all

**Implementation Plan**:

### Step 1: Data Collection
- Store an array of (x, y, timestamp) coordinates
- Limit to the last 100 points (memory management)

### Step 2: Analysis Metrics
- **Path curvature**: Calculate angle changes between segments
- **Bezier deviation**: How much the path deviates from a straight line
- **Direction changes**: Count significant direction changes (>15°)
- **Jitter/noise**: Natural human movements have micro-jitter

### Step 3: Scoring Rules

| Pattern | Human | Bot | Score Impact |
|---------|-------|-----|--------------|
| Perfectly straight lines | ❌ | ✅ | +50 sandbox |
| No direction changes | ❌ | ✅ | +30 sandbox |
| No micro-jitter | ❌ | ✅ | +20 sandbox |
| Bezier deviation = 0 | ❌ | ✅ | +40 sandbox |

### Algorithm Pseudocode

```
function analyzePathCurvature(points):
    if points.length < 10:
        return 0

    totalAngleChange = 0
    // Each angle needs a previous and a next point, so stop at length - 2
    for i = 1 to points.length - 2:
        angle = calculateAngle(points[i-1], points[i], points[i+1])
        totalAngleChange += abs(angle)

    avgCurvature = totalAngleChange / (points.length - 2)

    if avgCurvature < 5:    // Nearly straight
        return +50          // Suspicious
    return 0
```

### Files to Modify
- `templates/challenge/verify.html`

---

## 1.3 Click Position Tracking

**Gap**: No click position tracking - bots click predictable spots

**Current State**:
- Only detects whether a click happened
- No position analysis

**Implementation Plan**:

### Step 1: Data Collection
- Store click coordinates (x, y)
- Store the element clicked (button, link, etc.)
- Store time since page load

### Step 2: Analysis Metrics
- **Click accuracy**: Distance from element center
- **Click distribution**: Are clicks always in the same spot?
- **Element targeting**: Did they click the actual interactive element?

### Step 3: Scoring Rules

| Pattern | Human | Bot | Score Impact |
|---------|-------|-----|--------------|
| Click exactly at element center | ❌ | ✅ | +30 sandbox |
| Click outside viewport | ❌ | ✅ | +50 sandbox |
| Click at (0,0) or negative coords | ❌ | ✅ | +80 sandbox |
| Instant click after page load | ❌ | ✅ | +40 sandbox |
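### Code Sketch

A minimal sketch of the click tracking above, under the same assumption that it lives in `templates/challenge/verify.html`. The `clickData` structure and the 1 px "center" tolerance are illustrative; the score values come from the table.

```javascript
// Illustrative only: record where and when clicks land, then score against the table above.
var clickPageLoad = Date.now();
var clickData = [];

document.addEventListener('click', function(e) {
    var rect = e.target.getBoundingClientRect();
    var centerX = rect.left + rect.width / 2;
    var centerY = rect.top + rect.height / 2;
    var dx = e.clientX - centerX;
    var dy = e.clientY - centerY;

    clickData.push({
        x: e.clientX,
        y: e.clientY,
        distanceFromCenter: Math.sqrt(dx * dx + dy * dy),
        tag: e.target.tagName,                    // which element was targeted
        sincePageLoad: Date.now() - clickPageLoad
    });
});

function scoreClicks(clicks) {
    var score = 0;
    clicks.forEach(function(c) {
        if (c.distanceFromCenter < 1) score += 30;                            // pixel-perfect center click
        if (c.x < 0 || c.y < 0 || (c.x === 0 && c.y === 0)) score += 80;      // (0,0) or negative coords
        if (c.x > window.innerWidth || c.y > window.innerHeight) score += 50; // outside viewport
        if (c.sincePageLoad < 100) score += 40;                               // instant click after load
    });
    return score;
}
```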
### Files to Modify
- `templates/challenge/verify.html`

---

## 1.4 Scroll Depth Analysis

**Gap**: No scroll depth analysis - bots scroll instantly

**Current State**:
- Only counts scroll events
- No depth or pattern analysis

**Implementation Plan**:

### Step 1: Data Collection
- Store scroll position over time
- Track scroll velocity
- Track scroll direction changes

### Step 2: Analysis Metrics
- **Scroll velocity**: Pixels per second
- **Scroll pattern**: Gradual vs instant
- **Depth reached**: How far down the page
- **Reading pattern**: Pauses at content sections

### Step 3: Scoring Rules

| Pattern | Human | Bot | Score Impact |
|---------|-------|-----|--------------|
| Instant scroll to bottom | ❌ | ✅ | +40 sandbox |
| Constant scroll velocity | ❌ | ✅ | +25 sandbox |
| No scroll pauses | ❌ | ✅ | +20 sandbox |
| Scroll > 10000 px/sec | ❌ | ✅ | +50 sandbox |
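### Code Sketch

A minimal sketch of the scroll tracking above. The sample structure, the 500 ms pause threshold, and the "instant bottom" heuristic (reaching the bottom in fewer than three scroll events) are illustrative assumptions; the velocity thresholds come from the scoring table.

```javascript
// Illustrative only: sample scroll velocity and pauses, then score against the table above.
var scrollData = {
    samples: [],           // { velocity: px/sec, gap: seconds since previous scroll event }
    lastY: window.scrollY,
    lastTime: Date.now(),
    reachedBottom: false
};

window.addEventListener('scroll', function() {
    var now = Date.now();
    var dt = (now - scrollData.lastTime) / 1000;
    var dy = Math.abs(window.scrollY - scrollData.lastY);
    if (dt > 0) {
        scrollData.samples.push({ velocity: dy / dt, gap: dt });
    }
    if (window.scrollY + window.innerHeight >= document.documentElement.scrollHeight - 2) {
        scrollData.reachedBottom = true;
    }
    scrollData.lastY = window.scrollY;
    scrollData.lastTime = now;
});

function scoreScroll(data) {
    if (data.samples.length === 0) return 0;

    var velocities = data.samples.map(function(s) { return s.velocity; });
    var max = Math.max.apply(null, velocities);
    var min = Math.min.apply(null, velocities);
    var mean = velocities.reduce(function(a, b) { return a + b; }, 0) / velocities.length;

    var score = 0;
    if (max > 10000) score += 50;                                                   // inhuman scroll speed
    if (velocities.length > 3 && mean > 0 && (max - min) / mean < 0.1) score += 25; // near-constant velocity
    if (!data.samples.some(function(s) { return s.gap > 0.5; })) score += 20;       // no pauses (assumed 500 ms)
    if (data.reachedBottom && data.samples.length < 3) score += 40;                 // instant scroll to bottom
    return score;
}
```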
### Files to Modify
- `templates/challenge/verify.html`

---

## 1.5 Page Visibility API

**Gap**: No detection of background tabs

**Current State**:
- Not implemented

**Implementation Plan**:

### Step 1: Implementation
- Use the `document.visibilityState` API
- Track time spent visible vs hidden
- Detect if the page was never visible

### Step 2: Scoring Rules

| Pattern | Human | Bot | Score Impact |
|---------|-------|-----|--------------|
| Page never visible | ❌ | ✅ | +60 sandbox |
| Hidden > 90% of time | ❌ | ✅ | +30 sandbox |
| Interactions while hidden | ❌ | ✅ | +80 sandbox |

### Code Implementation

```javascript
var visibilityData = {
    wasEverVisible: !document.hidden, // true if the page starts in the foreground
    hiddenTime: 0,
    visibleTime: 0,
    lastChange: Date.now()
};

document.addEventListener('visibilitychange', function() {
    var now = Date.now();
    var elapsed = now - visibilityData.lastChange;

    if (document.hidden) {
        // Just went hidden: the elapsed period was spent visible
        visibilityData.visibleTime += elapsed;
    } else {
        // Just became visible: the elapsed period was spent hidden
        visibilityData.hiddenTime += elapsed;
        visibilityData.wasEverVisible = true;
    }
    visibilityData.lastChange = now;
});
```

### Files to Modify
- `templates/challenge/verify.html`

---

## 1.6 Reaction Time Patterns

**Gap**: No analysis of human reaction time variance

**Current State**:
- Basic timing only

**Implementation Plan**:

### Step 1: Data Collection
- Track time between events (page load → first move, move → click)
- Store an array of reaction times

### Step 2: Analysis Metrics
- **First interaction delay**: Humans take 500ms-5s to start
- **Reaction time variance**: Humans are inconsistent
- **Minimum reaction time**: Humans can't react < 150ms

### Step 3: Scoring Rules

| Pattern | Human | Bot | Score Impact |
|---------|-------|-----|--------------|
| First interaction < 100ms | ❌ | ✅ | +50 sandbox |
| All reactions identical timing | ❌ | ✅ | +30 sandbox |
| Reaction variance = 0 | ❌ | ✅ | +40 sandbox |
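### Code Sketch

A minimal sketch of reaction-time collection and scoring. The tracked event types and the 5 ms "effectively identical" spread are illustrative assumptions; the 100 ms rule and score values come from the table.

```javascript
// Illustrative only: record the first-interaction delay and the gaps between events.
var reactionData = { loadedAt: Date.now(), firstDelay: null, gaps: [], lastEventAt: null };

['mousemove', 'keydown', 'click', 'touchstart'].forEach(function(type) {
    document.addEventListener(type, function() {
        var now = Date.now();
        if (reactionData.firstDelay === null) {
            reactionData.firstDelay = now - reactionData.loadedAt;
        }
        if (reactionData.lastEventAt !== null) {
            reactionData.gaps.push(now - reactionData.lastEventAt);
        }
        reactionData.lastEventAt = now;
    });
});

function scoreReactionTimes(data) {
    var score = 0;
    if (data.firstDelay !== null && data.firstDelay < 100) score += 50; // faster than human reaction time

    if (data.gaps.length >= 5) {
        var mean = data.gaps.reduce(function(a, b) { return a + b; }, 0) / data.gaps.length;
        var variance = data.gaps.reduce(function(a, g) { return a + Math.pow(g - mean, 2); }, 0) / data.gaps.length;
        if (variance === 0) score += 40;               // reaction variance = 0
        else if (Math.sqrt(variance) < 5) score += 30; // effectively identical timing (assumed 5 ms spread)
    }
    return score;
}
```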
### Files to Modify
- `templates/challenge/verify.html`

---

# PHASE 2: ENVIRONMENT FINGERPRINTING

## 2.1 AudioContext Fingerprint

**Gap**: No AudioContext fingerprinting - unique per system

**Current State**:
- Not implemented

**Implementation Plan**:

### Step 1: Implementation
- Create an AudioContext with an OscillatorNode
- Generate an audio signal and analyze it
- Create a hash from the audio processing characteristics

### Step 2: Detection Logic
- VMs and headless browsers have different audio processing
- Missing AudioContext API is suspicious
- Identical fingerprints across sessions = suspicious

### Code Implementation

```javascript
function getAudioFingerprint(callback) {
    try {
        var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
        var oscillator = audioCtx.createOscillator();
        var analyser = audioCtx.createAnalyser();
        var gainNode = audioCtx.createGain();
        var scriptProcessor = audioCtx.createScriptProcessor(4096, 1, 1);

        gainNode.gain.value = 0; // Mute
        oscillator.type = 'triangle';
        oscillator.frequency.value = 10000;

        oscillator.connect(analyser);
        analyser.connect(scriptProcessor);
        scriptProcessor.connect(gainNode);
        gainNode.connect(audioCtx.destination);
        oscillator.start(0);

        // Capture audio data as it is processed
        var fingerprint = [];
        scriptProcessor.onaudioprocess = function(e) {
            var output = e.inputBuffer.getChannelData(0);
            for (var i = 0; i < output.length; i += 100) {
                fingerprint.push(output[i]);
            }
        };

        // Return the hash after a short delay so onaudioprocess has fired
        // (hashArray: project hash helper)
        setTimeout(function() {
            oscillator.stop();
            audioCtx.close();
            callback(hashArray(fingerprint));
        }, 500);
    } catch(e) {
        callback(null); // No AudioContext
    }
}
```

### Scoring Rules

| Pattern | Human | Bot | Score Impact |
|---------|-------|-----|--------------|
| No AudioContext API | ? | ✅ | +20 sandbox |
| AudioContext throws error | ? | ✅ | +15 sandbox |
| Known VM audio signature | ❌ | ✅ | +40 sandbox |

### Files to Modify
- `templates/challenge/verify.html`

---

## 2.2 WebRTC IP Leak Detection

**Gap**: No WebRTC IP leak check - reveals real IP vs VPN/proxy

**Current State**:
- Not implemented

**Implementation Plan**:

### Step 1: Implementation
- Create an RTCPeerConnection
- Generate an SDP offer
- Extract IP addresses from ICE candidates

### Step 2: Detection Logic
- Compare the WebRTC IP with the HTTP request IP
- Data center IPs in WebRTC = scanner
- Multiple IPs = proxy/VPN

### Code Implementation

```javascript
function detectWebRTCIPs(callback) {
    var ips = [];
    var done = false;

    function finish() {
        if (!done) {
            done = true;
            callback(ips);
        }
    }

    try {
        var pc = new RTCPeerConnection({
            iceServers: [{urls: 'stun:stun.l.google.com:19302'}]
        });
        pc.createDataChannel('');

        pc.onicecandidate = function(e) {
            if (!e.candidate) {
                finish(); // ICE gathering complete
                return;
            }
            var parts = e.candidate.candidate.split(' ');
            var ip = parts[4];
            if (ip && ips.indexOf(ip) === -1) {
                ips.push(ip);
            }
        };

        pc.createOffer().then(function(offer) {
            pc.setLocalDescription(offer);
        });

        // Timeout fallback in case gathering never completes
        setTimeout(finish, 3000);
    } catch(e) {
        finish();
    }
}
```

### Step 3: Server-Side Comparison
- Send detected IPs to the server with the verification request
- Server compares with the HTTP request IP
- Flag mismatches

### Scoring Rules

| Pattern | Human | Bot | Score Impact |
|---------|-------|-----|--------------|
| WebRTC blocked/unavailable | ? | ? | +10 sandbox |
| WebRTC IP ≠ HTTP IP | ? | ✅ | +30 sandbox |
| WebRTC IP is datacenter | ❌ | ✅ | +60 sandbox |
| Multiple WebRTC IPs | ? | ? | +15 sandbox |

### Files to Modify
- `templates/challenge/verify.html`
- `index.php` (verification handler)

---

## 2.3 Font Enumeration

**Gap**: No font detection - VMs have limited fonts

**Current State**:
- Not implemented

**Implementation Plan**:

### Step 1: Implementation
- Test for the presence of common fonts
- Measure text rendering differences
- Count available fonts

### Step 2: Detection Logic
- VMs typically have < 20 fonts
- Real systems have 100+ fonts
- Specific fonts indicate the OS

### Code Implementation

```javascript
function detectFonts() {
    var baseFonts = ['monospace', 'sans-serif', 'serif'];
    var testFonts = [
        'Arial', 'Arial Black', 'Comic Sans MS', 'Courier New', 'Georgia',
        'Impact', 'Times New Roman', 'Trebuchet MS', 'Verdana',
        'Calibri', 'Cambria', 'Consolas', 'Lucida Console', 'Segoe UI', 'Tahoma',
        // Mac fonts
        'Helvetica', 'Monaco', 'SF Pro',
        // Linux fonts
        'Ubuntu', 'DejaVu Sans', 'Liberation Sans'
    ];

    var testString = 'mmmmmmmmmmlli';
    var testSize = '72px';
    var canvas = document.createElement('canvas');
    var ctx = canvas.getContext('2d');

    function getWidth(font) {
        ctx.font = testSize + ' ' + font;
        return ctx.measureText(testString).width;
    }

    var baseWidths = {};
    baseFonts.forEach(function(font) {
        baseWidths[font] = getWidth(font);
    });

    var detectedFonts = [];
    testFonts.forEach(function(font) {
        for (var i = 0; i < baseFonts.length; i++) {
            // Quote the candidate so multi-word names (e.g. "Arial Black") parse correctly
            var testWidth = getWidth('"' + font + '",' + baseFonts[i]);
            if (testWidth !== baseWidths[baseFonts[i]]) {
                detectedFonts.push(font);
                break;
            }
        }
    });

    return {
        count: detectedFonts.length,
        fonts: detectedFonts
    };
}
```

### Scoring Rules

| Pattern | Human | Bot | Score Impact |
|---------|-------|-----|--------------|
| < 5 fonts detected | ❌ | ✅ | +40 sandbox |
| < 10 fonts detected | ? | ✅ | +25 sandbox |
| Only base fonts | ❌ | ✅ | +35 sandbox |
| No Windows/Mac specific fonts | ? | ? | +10 sandbox |

### Files to Modify
- `templates/challenge/verify.html`

---

## 2.4 Media Device Enumeration

**Gap**: No media device detection

**Current State**:
- Not implemented

**Implementation Plan**:

### Step 1: Implementation
- Use `navigator.mediaDevices.enumerateDevices()`
- Count audio/video devices
- Check for permission prompts

### Step 2: Detection Logic
- Real systems have microphones/cameras
- VMs often have none
- Headless browsers have none

### Code Implementation

```javascript
function detectMediaDevices(callback) {
    if (!navigator.mediaDevices || !navigator.mediaDevices.enumerateDevices) {
        callback({available: false, devices: []});
        return;
    }

    navigator.mediaDevices.enumerateDevices()
        .then(function(devices) {
            var result = {
                available: true,
                audioinput: 0,
                audiooutput: 0,
                videoinput: 0,
                total: devices.length
            };

            devices.forEach(function(device) {
                if (device.kind in result) {
                    result[device.kind]++;
                }
            });

            callback(result);
        })
        .catch(function() {
            callback({available: false, error: true});
        });
}
```

### Scoring Rules

| Pattern | Human | Bot | Score Impact |
|---------|-------|-----|--------------|
| API not available | ? | ✅ | +15 sandbox |
| Zero devices | ? | ✅ | +20 sandbox |
| No audio output | ❌ | ✅ | +25 sandbox |

### Files to Modify
- `templates/challenge/verify.html`

---

## 2.5 Speech API Check

**Gap**: No Speech API detection - missing in headless

**Current State**:
- Not implemented

**Implementation Plan**:

### Step 1: Implementation
- Check for the `speechSynthesis` API
- Check for the `SpeechRecognition` API
- Count available voices

### Code Implementation

```javascript
function checkSpeechAPIs() {
    var result = {
        synthesis: false,
        recognition: false,
        voices: 0
    };

    // Speech Synthesis
    if ('speechSynthesis' in window) {
        result.synthesis = true;
        var voices = speechSynthesis.getVoices();
        result.voices = voices.length;

        // Voices load async, check again
        if (result.voices === 0) {
            speechSynthesis.onvoiceschanged = function() {
                result.voices = speechSynthesis.getVoices().length;
            };
        }
    }

    // Speech Recognition
    if ('SpeechRecognition' in window || 'webkitSpeechRecognition' in window) {
        result.recognition = true;
    }

    return result;
}
```

### Scoring Rules

| Pattern | Human | Bot | Score Impact |
|---------|-------|-----|--------------|
| No speechSynthesis | ? | ✅ | +15 sandbox |
| Zero voices | ? | ✅ | +10 sandbox |
| No recognition API | ? | ? | +5 sandbox |

### Files to Modify
- `templates/challenge/verify.html`

---

## 2.6 Cross-Reference Client vs Server Signals

**Gap**: No verification that the client fingerprint matches server headers

**Current State**:
- Client and server detection run independently
- No cross-checking

**Implementation Plan**:

### Step 1: Data to Cross-Reference

| Client Signal | Server Signal | Check |
|---------------|---------------|-------|
| `navigator.platform` | User-Agent OS | Should match |
| `screen.width/height` | N/A | Compare to known bot sizes |
| WebRTC IPs | HTTP IP | Should overlap |
| `navigator.language` | Accept-Language | Should match |
| Timezone | IP geolocation | Should be reasonable |

### Step 2: Implementation
- Send client signals to the server with the verification request (sketched below)
- Server compares them with the headers already received
- Flag mismatches
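A sketch of the client-side half of Step 2. The `/verify` endpoint, the `token`/`signals` field names, and `sessionToken` are placeholders for whatever the existing verification request in `verify.html` already uses; the signal list mirrors the cross-reference table above.

```javascript
// Illustrative only: bundle the cross-reference signals and attach them to the
// verification request. '/verify', 'token', and 'signals' are assumed names.
function submitClientSignals(webrtcIPs, sessionToken) {
    var signals = {
        platform: navigator.platform,
        language: navigator.language,
        screenWidth: screen.width,
        screenHeight: screen.height,
        timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
        webrtcIPs: webrtcIPs || []
    };

    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/verify', true);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify({ token: sessionToken, signals: signals }));
}
```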
### Step 3: Server-Side Logic

```php
function crossReferenceSignals($clientSignals, $serverHeaders) {
    $score = 0;
    $reasons = [];

    // Platform vs User-Agent
    $clientPlatform = $clientSignals['platform'] ?? '';
    $serverUA = $serverHeaders['USER_AGENT'] ?? '';
    if (stripos($clientPlatform, 'Win') !== false && stripos($serverUA, 'Windows') === false) {
        $score += 40;
        $reasons[] = 'platform_ua_mismatch';
    }

    // Language check
    $clientLang = $clientSignals['language'] ?? '';
    $serverLang = $serverHeaders['ACCEPT_LANGUAGE'] ?? '';
    if (!empty($clientLang) && strpos($serverLang, substr($clientLang, 0, 2)) === false) {
        $score += 25;
        $reasons[] = 'language_mismatch';
    }

    // WebRTC IP vs HTTP IP
    $webrtcIPs = $clientSignals['webrtcIPs'] ?? [];
    $httpIP = $_SERVER['REMOTE_ADDR'];
    if (!empty($webrtcIPs) && !in_array($httpIP, $webrtcIPs)) {
        $score += 30;
        $reasons[] = 'ip_mismatch';
    }

    // Screen size check
    $width = $clientSignals['screenWidth'] ?? 0;
    $height = $clientSignals['screenHeight'] ?? 0;
    if (($width == 800 && $height == 600) || ($width == 1024 && $height == 768)) {
        $score += 30;
        $reasons[] = 'vm_screen_size';
    }

    return ['score' => $score, 'reasons' => $reasons];
}
```

### Files to Modify
- `templates/challenge/verify.html`
- `index.php` (verification handler)
- New: `core/signal_validator.php`

---

# PHASE 3: CONTENT STRATEGY

## 3.1 Working Internal Links

**Gap**: Links within decoy pages go to 404

**Current State**:
- Links like `/services/acupuncture` return 404
- Scanners follow links and see errors

**Implementation Plan**:

### Step 1: Create Sub-Page Routes
- Add routing in `index.php` for decoy sub-pages
- Route `/services/*`, `/about`, `/contact`, etc.

### Step 2: Create Sub-Page Templates
- Create simplified versions of sub-pages
- Reuse the header/footer from the main decoy
- Add unique content per page

### Step 3: Implementation

```php
// In index.php, add before main routing
// Decoy sub-pages (for scanners following links)
// Most specific paths first so '/services/acupuncture' is not shadowed by '/services'
$decoyPaths = [
    '/services/acupuncture' => 'decoy/service-detail.html',
    '/services'             => 'decoy/services.html',
    '/about'                => 'decoy/about.html',
    '/contact'              => 'decoy/contact.html',
    '/privacy-policy'       => 'decoy/privacy.html',
    '/terms-of-service'     => 'decoy/terms.html',
];

$requestPath = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

foreach ($decoyPaths as $path => $template) {
    if (strpos($requestPath, $path) === 0) {
        // Serve decoy sub-page
        Response::decoy(basename($template, '.html'));
        exit;
    }
}
```

### Files to Create
- `templates/decoy/services.html`
- `templates/decoy/service-detail.html`
- `templates/decoy/about.html`
- `templates/decoy/contact.html`
- `templates/decoy/privacy.html`
- `templates/decoy/terms.html`

### Files to Modify
- `index.php`

---

## 3.2 Real Images

**Gap**: No real images - placeholder emojis

**Current State**:
- Using emoji as placeholders
- No actual image files

**Implementation Plan**:

### Step 1: Image Strategy
- Use royalty-free stock images
- Optimize for web (compress, resize)
- Create a consistent visual style

### Step 2: Required Images

| Image | Purpose | Size (px) |
|-------|---------|-----------|
| logo.png | Header logo | 200x50 |
| hero-bg.jpg | Hero background | 1920x600 |
| service-1.jpg | Service card | 400x300 |
| service-2.jpg | Service card | 400x300 |
| service-3.jpg | Service card | 400x300 |
| team-1.jpg | Team member | 300x300 |
| team-2.jpg | Team member | 300x300 |
| about.jpg | About section | 600x400 |
| og-image.jpg | Social sharing | 1200x630 |

### Step 3: Implementation
- Create a `public/images/decoy/` directory
- Add images to the directory
- Update templates to reference the images

### Files to Create
- `public/images/decoy/` directory with images

### Files to Modify
- All decoy templates

---
## 3.3 Working Forms

**Gap**: Forms don't submit - dead endpoints

**Current State**:
- Newsletter form does nothing
- Contact form does nothing

**Implementation Plan**:

### Step 1: Create Honeypot Endpoints
- Forms submit to real endpoints
- Endpoints accept data but do nothing with it
- Return a success response

### Step 2: Implementation

```php
// In index.php, add form handlers
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

    // Newsletter signup (honeypot)
    if ($path === '/subscribe') {
        // Log as scanner activity
        log_info("Honeypot: newsletter form submitted");
        // Return success page
        header('Location: /thank-you');
        exit;
    }

    // Contact form (honeypot)
    if ($path === '/contact-submit') {
        log_info("Honeypot: contact form submitted");
        header('Location: /thank-you');
        exit;
    }
}
```

### Step 3: Thank You Page
- Create a generic thank-you page
- Redirect back to the main page after a delay

### Files to Create
- `templates/decoy/thank-you.html`

### Files to Modify
- `index.php`
- All decoy templates (update form actions)

---

## 3.4 Dynamic Content

**Gap**: Static content - same page every time

**Current State**:
- Exact same content on every load
- Easy to fingerprint

**Implementation Plan**:

### Step 1: Randomization Points

| Element | Randomization |
|---------|---------------|
| Testimonial names | Pool of 20 names |
| Statistics numbers | ±5% variance |
| Team member order | Shuffle |
| Service order | Shuffle |
| Review dates | Recent random dates |

### Step 2: Implementation

```php
// In Response::decoy()
function randomizeContent($html) {
    // Randomize testimonial names (swapped into the testimonial placeholders in the template)
    $names = ['Jennifer H.', 'Mark T.', 'Sarah L.', 'David M.', 'Emma K.'];
    shuffle($names);

    // Randomize stats
    $patients = rand(9500, 10500);
    $rating = number_format(rand(47, 50) / 10, 1);
    $reviews = rand(230, 260);

    // Apply replacements
    $html = str_replace('10,000+', number_format($patients) . '+', $html);
    $html = str_replace('4.9★', $rating . '★', $html);
    $html = str_replace('247', $reviews, $html);

    return $html;
}
```

### Files to Modify
- `core/response.php`

---

## 3.5 Deep Page Structure

**Gap**: Single page only - no site depth

**Current State**:
- Only the main decoy page
- Scanners can tell it's shallow

**Implementation Plan**:

### Step 1: Site Structure

```
/                        → Main business page
├── /services            → Services overview
│   ├── /acupuncture     → Service detail
│   ├── /pain-management
│   └── /stress-relief
├── /about               → About page
├── /team                → Team page
├── /contact             → Contact page
├── /blog                → Blog listing
│   └── /blog/post-1     → Blog post
├── /faq                 → FAQ page
├── /privacy-policy      → Legal
└── /terms-of-service    → Legal
```

### Step 2: Template Hierarchy
- Create a base template with header/footer
- Create page-specific content blocks
- Reuse components across pages

### Step 3: Implementation
- Create all sub-page templates
- Add routing for all paths
- Ensure inter-linking works

### Files to Create
- 10+ additional decoy templates

### Files to Modify
- `index.php` (routing)
- `core/response.php` (template loading)

---

# PHASE 4: TIMING-BASED DETECTION (Minimal)

## 4.1 Session Duration Tracking

**Note**: Skipping email send-time tracking since the sender is not integrated.

**Implementation**: Track time on page before interaction

### Data Points
- Time from page load to first interaction
- Time from first interaction to form submission
- Total session duration

### Scoring Rules

| Pattern | Human | Bot | Score Impact |
|---------|-------|-----|--------------|
| Interaction < 500ms after load | ❌ | ✅ | +40 sandbox |
| Total session < 1 second | ❌ | ✅ | +50 sandbox |
| Form submit < 2 seconds | ❌ | ✅ | +35 sandbox |
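### Code Sketch

A minimal sketch of the three timing data points above. The checklist below refers to `totalTime` and `firstInteractionTime`; the `sessionTiming`/`scoreSessionTiming` names used here are illustrative stand-ins, and the thresholds come from the scoring table.

```javascript
// Illustrative only: capture the durations used by the scoring rules above.
var sessionTiming = { loadedAt: Date.now(), firstInteractionAt: null };

['mousemove', 'keydown', 'click', 'touchstart'].forEach(function(type) {
    document.addEventListener(type, function() {
        if (sessionTiming.firstInteractionAt === null) {
            sessionTiming.firstInteractionAt = Date.now();
        }
    });
});

// Call when the challenge form is submitted
function scoreSessionTiming() {
    var total = Date.now() - sessionTiming.loadedAt;
    var firstDelay = sessionTiming.firstInteractionAt !== null
        ? sessionTiming.firstInteractionAt - sessionTiming.loadedAt
        : null;

    var score = 0;
    if (firstDelay !== null && firstDelay < 500) score += 40; // interaction < 500ms after load
    if (total < 1000) score += 50;                            // total session < 1 second
    if (total < 2000) score += 35;                            // form submitted < 2 seconds
    return score;
}
```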
---

# IMPLEMENTATION CHECKLIST

## Phase 1: Behavioral Scoring
- [x] 1.1 Mouse velocity analysis
- [x] 1.2 Mouse path analysis
- [x] 1.3 Click position tracking
- [x] 1.4 Scroll depth analysis
- [x] 1.5 Page Visibility API
- [x] 1.6 Reaction time patterns

## Phase 2: Environment Fingerprinting
- [x] 2.1 AudioContext fingerprint
- [x] 2.2 WebRTC IP leak detection
- [x] 2.3 Font enumeration
- [x] 2.4 Media device enumeration
- [x] 2.5 Speech API check
- [x] 2.6 Cross-reference client vs server

## Phase 3: Content Strategy
- [x] 3.1 Working internal links
- [x] 3.2 Real images (Unsplash CDN - wellness, team, services)
- [x] 3.3 Working forms (honeypot)
- [x] 3.4 Dynamic content randomization
- [x] 3.5 Deep page structure

## Phase 4: Timing (Minimal)
- [x] 4.1 Session duration tracking (via `totalTime` + `firstInteractionTime`)

---

# TESTING PLAN

## Test Cases

### Behavioral Tests
1. Use Puppeteer/Playwright to test detection
2. Test with instant mouse movements
3. Test with no mouse movements
4. Test with a background tab

### Fingerprint Tests
1. Test in headless Chrome
2. Test in a VM (VirtualBox)
3. Test with a VPN
4. Test with different browsers

### Content Tests
1. Follow all internal links
2. Submit all forms
3. Check for 404 errors
4. Verify randomization

---

# RISK ASSESSMENT

| Risk | Impact | Mitigation |
|------|--------|------------|
| False positives on real users | HIGH | Tune thresholds carefully |
| Performance impact | MEDIUM | Async operations, caching |
| Browser compatibility | MEDIUM | Feature detection, fallbacks |
| Maintenance burden | LOW | Modular code, documentation |

---

# SUCCESS METRICS

| Metric | Current | Target |
|--------|---------|--------|
| Scanner detection rate | ~70% | 95%+ |
| False positive rate | ~5% | <1% |
| Challenge bypass rate | Unknown | <5% |
| Average detection time | N/A | <100ms |

---

**Document Status**: Ready for Implementation
**Next Step**: Begin Phase 1 - Behavioral Scoring
**Estimated Effort**: 2-3 weeks for all phases