LABORATORY — QUALITY TESTING & COMPLIANCE
The quality gatekeeper of the factory. Every cable produced must pass through lab testing at defined intervals before it can move to the next production step or ship to a customer. The Lab module bridges Teknik (test standards), Production (test requests), and Quality (pass/fail with measurements).
The Laboratory module is the factory’s quality conscience. It does not create work — it receives work from Production in the form of test requests that are automatically generated at three critical intervals: production start, after every basket/reel output, and production end. These intervals are not arbitrary; they are defined in the cable design during the Teknik phase and embedded into work cards. When a test request arrives, a lab user opens a dynamically generated measurement form whose fields, units, sample counts, and parameters are all pulled from the test standard — no two test forms look alike. The lab user records measurements, marks the test as passed or failed, and the system notifies the production operator. If a test fails, the system automatically creates a retry request. Tests are bonded to specific production outputs, creating full traceability from a finished cable reel back to every quality check it underwent. The Lab module is the reason the factory can prove compliance to IEC-EN, UL, and custom SLN standards.
1. WHAT LABORATORY DOES
The Laboratory module answers four questions that the entire production system depends on:
- “Does this material meet the standard?” — Every cable design specifies which tests must be performed and at what frequency. The Lab executes these tests and records quantitative measurements against defined parameters.
- “Can production continue?” — Production pages poll test status every 5 seconds. If tests for a given interval are still pending or have failed, the operator sees the status. Passing all tests clears the path for the next step.
- “What happened when a test failed?” — Failed tests automatically create retry requests. The retry chain is tracked via `parent_request_id`, so the system knows this is the 2nd or 3rd attempt at the same test for the same output.
- “Can we prove compliance?” — Every test result is permanently linked to the specific production output (basket/reel) it covers. Tests are bonded to outputs either directly (`output_id`) or at the lot level (`linked_output_ids`). This creates an unbroken traceability chain from finished product back through every quality check.
Key architectural insight: The Lab module does not define what tests to run — that comes from the Teknik module’s test standards and the cable design. The Lab module does not decide when to run them — that comes from the Production module which creates test requests at design-specified intervals. The Lab module’s sole responsibility is execution and recording: receive a test request, generate the measurement form, capture results, and report back. This separation makes it impossible for production to skip a required test, because the tests are embedded in the design itself.
2. THE DATA FLOW
The Lab module sits at the intersection of three other modules. Data flows into Lab from Teknik and Production, and results flow out back to Production and the notification system.
2.1 The Test Request Lifecycle
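The lifecycle described throughout this document (a request is created as PENDING by Production, completed as PASSED or FAILED by Lab, and a failure spawns a fresh PENDING retry rather than reopening the original) can be sketched as a minimal state machine. This is an illustration only; `can_transition` is a hypothetical helper, while the status values follow the enum in Section 3.1:

```python
from enum import Enum

class TestStatus(Enum):
    PENDING = "PENDING"
    PASSED = "PASSED"
    FAILED = "FAILED"

# A request starts PENDING; completion moves it to PASSED or FAILED.
# Both completed states are terminal: a FAILED request spawns a *new*
# PENDING retry request instead of transitioning back itself.
TRANSITIONS = {
    TestStatus.PENDING: {TestStatus.PASSED, TestStatus.FAILED},
    TestStatus.PASSED: set(),
    TestStatus.FAILED: set(),
}

def can_transition(current: TestStatus, target: TestStatus) -> bool:
    return target in TRANSITIONS[current]
```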
2.2 The Three Test Intervals
| Interval | Turkish Name | When Created | Scope | Bonding |
|---|---|---|---|---|
| üretim_başı | Üretim Başı | Session start | Entire production lot | Linked to first output (or all outputs via linked_output_ids) |
| her_sepet_sonu | Her Sepet Sonu | After each output | Specific basket/reel | Directly linked via output_id |
| üretim_sonu | Üretim Sonu | Session end | Entire production lot | Linked to last output (or all outputs via linked_output_ids) |
These intervals are not hard-coded in the Lab module. They come from the cable design’s material_details.tests[].frequency field, which supports comma-separated values (e.g., "üretim_başı,her_sepet_sonu" means a test runs both at start and after every basket).
2.3 What Happens When a Test Fails
The retry mechanism is automatic. When a lab user marks a test as failed, the backend immediately creates a new ProductionTestRequest with retry_count = original + 1 and parent_request_id = original.id. This chains retries together for full audit history. The new request appears in the Lab’s pending queue, and the production operator receives a high-priority notification.
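The retry chain can be sketched in plain Python. Field names follow the schema in Section 3.1; `make_retry` itself is a hypothetical helper, not the actual backend code:

```python
def make_retry(original: dict) -> dict:
    """Clone a failed request into a fresh PENDING retry, chained to its parent."""
    retry = {
        k: original[k]
        for k in ("work_card_id", "session_id", "output_id",
                  "test_standard_id", "machine_type", "test_interval")
    }
    retry["status"] = "PENDING"
    retry["retry_count"] = original.get("retry_count", 0) + 1
    retry["parent_request_id"] = original["id"]  # chains the audit history
    return retry

failed = {"id": 42, "work_card_id": 7, "session_id": 3, "output_id": 9,
          "test_standard_id": 5, "machine_type": "extruder",
          "test_interval": "her_sepet_sonu", "status": "FAILED", "retry_count": 0}
retry = make_retry(failed)
```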
2.4 The Notification Bridge
| Event | Recipients | Priority | Deep Link |
|---|---|---|---|
| Test request created | All lab_user + super_admin (excludes requester) | High | /lab/test with session_id, work_card_id |
| Test passed | Requester + super_admin (excludes tester) | Medium | /lab/test with test_request_id |
| Test failed | Requester + super_admin (excludes tester) | High | /lab/test with test_request_id, retry_request_id |
3. THE DATABASE LAYER
The Lab module uses 5 tables. Two are the core test request tables (legacy + production-integrated), and three come from the Teknik module’s test standards system.
3.1 production_test_requests (20 columns) — The Primary Table
This is the real workhorse. Every test request created by Production lives here.
| Column | Type | Details |
|---|---|---|
| id | Integer PK | Auto-increment, indexed |
| work_card_id | Integer FK | References work_cards.id, indexed |
| session_id | Integer FK | References production_sessions.id, indexed |
| output_id | Integer FK | References production_outputs.id — set for her_sepet_sonu tests |
| linked_output_ids | JSON | Array of output IDs for lot-level tests (üretim_başı, üretim_sonu) |
| test_standard_id | Integer FK | References test_standards.id |
| machine_type | String(50) | Which machine: kabatel_cekme, kalaylama, incetel_cekme, buncher, extruder, ebeam |
| test_interval | String(50) | üretim_başı / her_sepet_sonu / üretim_sonu |
| input_slot | Integer | 0 or 1 for parallel production mode |
| retry_count | Integer | Default 0. Incremented on each retry. |
| parent_request_id | Integer FK | Self-reference — points to original failed test |
| status | Enum | PENDING / PASSED / FAILED |
| measurements | JSON | Structure: { samples: [{ sample_num, params: { name: value } }] } |
| requested_by | Integer FK | User who triggered creation (usually production operator) |
| requested_at | DateTime | UTC |
| tested_by | Integer FK | Lab user who completed the test |
| tested_at | DateTime | When test was completed |
| notes | Text | Lab user notes |
| created_at | DateTime | UTC |
| updated_at | DateTime | Auto-updated |
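The `measurements` JSON shape shown in the table can be checked with a small validator. This is illustrative only; `valid_measurements` is a hypothetical helper, not part of the backend:

```python
def valid_measurements(doc: dict) -> bool:
    """Check the { samples: [{ sample_num, params: {name: value} }] } shape."""
    samples = doc.get("samples")
    if not isinstance(samples, list) or not samples:
        return False
    return all(
        isinstance(s.get("sample_num"), int) and isinstance(s.get("params"), dict)
        for s in samples
    )

doc = {"samples": [{"sample_num": 1, "params": {"Diameter": 1.82}}]}
```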
3.2 test_requests (15 columns) — Legacy Table
The original test request system, still operational for backward compatibility. Uses material_id and material_qr instead of work card/session linkage. Status enum differs: test_bekleniyor, test_basarili, test_basarisiz, iptal. The production-integrated production_test_requests table is the primary system going forward.
3.3 test_standards (from Teknik module)
Defines what to test. Each standard has a category (IEC-EN, UL, SLN), test name, number of required samples, test method, and a set of parameters. Lab reads this to generate dynamic measurement forms.
| Column | Details |
|---|---|
| standard_category | IEC-EN, UL, or SLN (custom factory standard) |
| test_name / test_name_en | Turkish and English test names |
| standard_number | Reference standard (e.g., “IEC 60228”) |
| test_samples | Number of samples required (≥ 1) |
| test_method | Description of how to perform the test |
3.4 test_parameters (from Teknik module)
Each test standard has one or more parameters. These are the actual measurement fields that appear in the lab’s dynamic form.
| Column | Details |
|---|---|
| parameter_name | e.g., “Diameter”, “Resistance”, “Tensile Strength” |
| parameter_unit | e.g., “mm”, “Ω/km”, “N/mm²” |
| parameter_order | Display order in the form |
| is_required | Whether measurement is mandatory |
3.5 test_results (from Teknik module)
Stores historical test results linked to cable batches. Includes pass_fail status (PASS/FAIL/PENDING), measurements JSON, and links to the test standard and production session.
Dual-system architecture: The database has both test_requests (legacy, material-based) and production_test_requests (current, production-based). The legacy system was the first implementation, linking tests to raw materials. The production system evolved to link tests to work cards, sessions, and specific outputs. Both coexist — the legacy endpoints are still functional but the production-integrated system is what drives the factory floor today.
4. THE BACKEND ARCHITECTURE
4.1 Route Files
| File | Lines | Prefix | Endpoints | Purpose |
|---|---|---|---|---|
| test_routes.py | 318 | /api/test-requests | 6 | Legacy test request CRUD. Create, list, get, complete, update, delete. |
| test_integration_routes.py | 745 | /api/production/test-requests | 9 | Production-integrated test system. Create for interval, check status, pending, all, details, complete, delete, bonded tests by output ID and output code. |
4.2 API Contract — Production Test Endpoints (9)
| Method | Path | Permission | What It Does |
|---|---|---|---|
| POST | /api/production/test-requests/create-for-interval | authenticated | Create test requests for a specific interval. Reads tests from work card’s material_details.tests, filters by frequency, creates ProductionTestRequest per matching test. Sends notifications to lab users. |
| GET | /api/production/test-requests/check-status/{session_id}/{interval} | authenticated | Check if all tests for an interval are passed. Returns all_passed boolean, pending/failed/passed counts, and lists of pending/failed tests. Used by production frontend for status polling. |
| GET | /api/production/test-requests/pending/{session_id} | authenticated | Get all pending tests for a session. Eager-loads test_standard and requester. |
| GET | /api/production/test-requests/all | authenticated | Get all production test requests (for Lab page). Includes machine names, test standard info. Filterable by status. Limit default 50. |
| GET | /api/production/test-requests/{test_id} | authenticated | Get a specific test request with full details: test standard with sorted parameters, work card info. |
| POST | /api/production/test-requests/{test_id}/complete | lab_user, super_admin | Complete a test. Sets status to PASSED or FAILED, stores measurements JSON, records tester. If failed: auto-creates retry request. Sends notifications. |
| DELETE | /api/production/test-requests/{test_id} | super_admin | Delete a test request. |
| GET | /api/production/test-requests/bonded-tests/output/{output_id} | authenticated | Get all tests bonded to a specific output. Includes direct (output_id match), lot-level (linked_output_ids contains), and legacy (session match). Groups by interval. |
| GET | /api/production/test-requests/bonded-tests/output-code/{output_code} | authenticated | Same as above but by output code (e.g., X1, X2). Resolves code to ID internally. |
4.3 Legacy Test Endpoints (6)
| Method | Path | Permission | What It Does |
|---|---|---|---|
| POST | /api/test-requests/ | authenticated | Create legacy test request (material-based). |
| GET | /api/test-requests/ | authenticated | List legacy test requests with filters. |
| GET | /api/test-requests/{id} | authenticated | Get specific legacy test request. |
| POST | /api/test-requests/{id}/complete | lab_user, super_admin | Complete legacy test with pass/fail and results. |
| PUT | /api/test-requests/{id} | super_admin | Update legacy test request. |
| DELETE | /api/test-requests/{id} | super_admin | Delete legacy test request. |
4.4 How Production Creates Test Requests
Test requests are not created by the Lab module. They are created by Production at three points in the session lifecycle:
- Session start (`POST /api/production/start-session`): reads `work_card.material_details.tests`, filters for `üretim_başı` frequency, creates one `ProductionTestRequest` per matching test (one per slot in parallel mode).
- Output recording (`POST /api/production/record-output`): filters for `her_sepet_sonu` frequency, creates test requests linked to the specific `output_id`.
- Session end (`POST /api/production/end-session`): filters for `üretim_sonu` frequency, creates test requests, then links lot-level tests (`üretim_başı` and `üretim_sonu`) to all relevant outputs via `linked_output_ids`.
4.5 The Bonded Tests System
Tests are bonded to production outputs in three ways, checked in this order:
- Direct bond: `output_id` matches the output — used for `her_sepet_sonu` tests that apply to a single basket/reel.
- Lot-level bond: `session_id` matches AND `test_interval` is `üretim_başı` or `üretim_sonu` — these lot-level tests apply to all outputs in the session.
- Legacy bond: `session_id` matches AND `output_id IS NULL` AND `linked_output_ids IS NULL` — backward compatibility for old test requests.
The GET /bonded-tests/output/{output_id} endpoint returns tests grouped by interval, giving a complete quality picture for any single output.
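The three bonding levels can be expressed as a predicate checked in order. This is a sketch under the rules listed above; `is_bonded` is a hypothetical name, and the request dicts mirror the production_test_requests columns:

```python
LOT_INTERVALS = {"üretim_başı", "üretim_sonu"}

def is_bonded(req: dict, output_id: int, session_id: int) -> bool:
    """Decide whether a test request covers the given production output."""
    # 1. Direct bond: request tied to this exact basket/reel
    if req.get("output_id") == output_id:
        return True
    # 2. Lot-level bond: same session and a lot-wide interval
    if req.get("session_id") == session_id and req.get("test_interval") in LOT_INTERVALS:
        return True
    # 3. Legacy bond: same session, no output linkage at all
    if (req.get("session_id") == session_id
            and req.get("output_id") is None
            and req.get("linked_output_ids") is None):
        return True
    return False
```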
4.6 Parallel Production Support
Some machines (e.g., OTOMEC Kalaylama, CGN E-beam) support parallel processing with two input slots. The Lab module handles this through the input_slot field (0 or 1). At session start and end, one test request is created per slot for lot-level tests. Slot-specific outputs are linked via source_qr_codes in HalfProductStock.
5. THE FRONTEND
5.1 Page Structure
| Route | Component | Lines | Purpose |
|---|---|---|---|
| /lab/dashboard | Lab/Dashboard/index.tsx | 133 | Overview panel with statistics cards and alert/test tables. Currently uses mock data — will be connected to real APIs. |
| /lab/test | Lab/Test/index.tsx | 803 | The primary Lab page. ProTable of all production test requests with dynamic measurement forms, completion workflow, and result viewing. |
5.2 The Test Page — Core Lab Interface
This is where lab users spend most of their time. It shows all production test requests in a ProTable with 9 columns and supports the complete test lifecycle.
ProTable Columns (9)
| # | Column | Width | Details |
|---|---|---|---|
| 1 | Kod | 70px, fixed left | Purple tag: L{id} |
| 2 | Test | auto | Test name + standard number |
| 3 | İş Kartı | 80px | Blue tag: #{work_card_id} |
| 4 | Makine | 150px | Machine name (Turkish) |
| 5 | Aralık | 100px | Interval tag: Üretim Başı / Her Sepet / Üretim Sonu |
| 6 | Talep | 130px | Requested at (DD.MM.YY HH:mm, Turkey timezone) |
| 7 | Durum | 100px | Status tag: Beklemede (orange), Onaylandı (green + check), Başarısız (red + X) |
| 8 | Test Eden | 100px | Tester name or “-” |
| 9 | İşlemler | 140px, fixed right | Conditional: “Test Başlat” (pending + lab_user), “Görüntüle” (completed), Delete (super_admin) |
Dynamic Test Forms
When a lab user clicks “Test Başlat”, the system fetches the full test standard from GET /api/teknik/standards/tests/{id}/full including all parameters. The form is generated dynamically:
- For each sample (1 to `test_samples`): a divider “Numune {n}”
- For each parameter within each sample: an `InputNumber` field with the parameter name as label, unit as addon, min/max bounds, step 0.01, and required flag
- A notes textarea at the bottom
- Three action buttons: Cancel, Başarısız (Failed, danger), Onaylandı (Passed, primary)
The measurement data is assembled into a structured JSON: { samples: [{ sample_num: 1, params: { "Diameter": 1.82, "Resistance": 9.45 } }] }
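The assembly from flat form fields into this nested JSON can be illustrated in Python (the actual page does this in TypeScript). The helper name `assemble_measurements` is hypothetical; the `sample_{s}_param_{id}` field naming follows Section 7.7:

```python
def assemble_measurements(values: dict, param_names: dict) -> dict:
    """Group flat 'sample_{s}_param_{id}' form values into the samples JSON."""
    samples: dict[int, dict] = {}
    for field, value in values.items():
        if value is None or not field.startswith("sample_"):
            continue  # skip notes and empty optional fields
        _, s, _, param_id = field.split("_")
        params = samples.setdefault(int(s), {})
        params[param_names[int(param_id)]] = value
    return {
        "samples": [
            {"sample_num": n, "params": params}
            for n, params in sorted(samples.items())
        ]
    }

values = {"sample_1_param_10": 1.82, "sample_1_param_11": 9.45,
          "sample_2_param_10": 1.81}
param_names = {10: "Diameter", 11: "Resistance"}
result = assemble_measurements(values, param_names)
```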
Real-Time Updates
The test list polls every 5 seconds via setInterval(fetchTestRequests, 5000). No WebSocket is used for the Lab page — polling was chosen for simplicity and reliability.
Result Viewing
Clicking “Görüntüle” on a completed test opens a read-only modal with a compact measurement table: column headers are parameter names (with units), rows are samples, cells are recorded values. Notes are shown below the table.
5.3 The Dashboard — Overview Panel
Currently shows 4 statistic cards (Pending Alerts, Active Tests, Completed Tests, Today’s Tests) and two tables (Pending Lab Alerts, Active Test Processes). All data is currently mock/hardcoded — this is a prepared shell that will be connected to real APIs as the Lab module matures.
5.4 Client-Side Search
The Test page implements comprehensive client-side search filtering across: test code (L{id}), test name, standard number, work card ID, machine name/type, interval text, tester name, and status (in Turkish).
5.5 Permission System
| Page | Route Guard | Buttons in Manifest |
|---|---|---|
| Lab Dashboard | canLab | access_page, view_stats |
| Lab Test | canLab | access_page, create_test, view_tests |
User type checks in the Test page: isLabUser (lab_user or super_admin) can start/complete tests. isSuperAdmin can delete test requests. Regular users have read-only access.
6. LAB PANEL (DASHBOARD)
6.1 Purpose
The Lab Panel is the landing page for laboratory users at /lab/dashboard. Its purpose is to provide an at-a-glance overview of the lab’s workload: how many alerts are pending, how many tests are active, how many were completed, and what happened today. It is the first thing a lab user sees after logging in.
6.2 Current Status — Prepared Shell
The Dashboard is currently a prepared shell with mock data. All statistics and table rows are hardcoded — no API calls are made. This is intentional: the shell was built first to define the layout and information architecture, and will be connected to real endpoints as the Lab module matures. The component is 133 lines of clean React code, fully styled and ready for integration.
6.3 Layout & Components
The page uses Ant Design’s grid system with a PageContainer wrapper that displays the title “Lab Kontrol Paneli” and a personalized greeting showing the logged-in user’s name.
Statistics Row (4 cards)
Four Statistic cards in a responsive 4-column grid (Col lg={6}), each with an icon and color-coded value:
| # | Card Title | Mock Value | Icon | Color |
|---|---|---|---|---|
| 1 | Bekleyen Uyarılar (Pending Alerts) | 2 | AlertOutlined | Error (red) |
| 2 | Aktif Testler (Active Tests) | 2 | ExperimentOutlined | Warning (orange) |
| 3 | Tamamlanan Testler (Completed Tests) | 15 | CheckCircleOutlined | Success (green) |
| 4 | Bugünkü Testler (Today’s Tests) | 8 | ClockCircleOutlined | Primary (blue) |
Data Tables (2 side-by-side)
Below the stats, two tables sit in a responsive 2-column layout (Col lg={12}):
| Table | Card Header | Badge | Columns | Mock Rows |
|---|---|---|---|---|
| Bekleyen Lab Uyarıları | Pending Lab Alerts | Acil (red tag) | Makine, Uyarı, Saat, İşlem | 2 rows: Kabatel Çekme 01 (new production alert), Kalaylama 01 (slow mode approval) |
| Aktif Test Süreçleri | Active Test Processes | Devam Ediyor (processing tag) | Test Kodu, Ürün, Durum, Öncelik, İşlem | 2 rows: SLHT001 - 1.8mm Tel (testing, high priority), SLDT002 - Kalay Tel (awaiting approval, normal) |
Each table has an action button column: “Test Başlat” for alerts, “Onayla” for active tests. These buttons are currently non-functional placeholders.
6.4 Authentication Integration
Despite being mock, the Dashboard already integrates with the real auth system. It calls RealAuthService.getCurrentUser() to display the logged-in user’s name in the subtitle. The route is guarded by the canLab permission in the route configuration.
6.5 Roadmap
When connected to real APIs, the Dashboard will pull live data from:
- `GET /api/production/test-requests/all?status=pending` → Pending count and alert table
- `GET /api/production/test-requests/all?status=passed` → Completed count
- Aggregated daily statistics from test request timestamps
- Real-time alert integration with the notification system
Why ship a mock page? In a factory ERP, the dashboard layout is critical for user adoption. Building the shell first — with realistic mock data and the exact card/table structure — allows stakeholders to validate the information hierarchy before any backend work is connected. The 133-line component is intentionally minimal so that swapping mock data for real API calls is a straightforward integration task.
7. TEST YÖNETİMİ (TEST MANAGEMENT)
7.1 Purpose & Scale
Test Yönetimi is the operational heart of the Laboratory module. Located at /lab/test, it is where lab users receive, execute, and record every quality test the factory runs. The frontend component is 803 lines of React/TypeScript, the backend route file is 745 lines of Python/FastAPI, and together they handle 9 production-integrated API endpoints plus 6 legacy endpoints. Every interaction between Production and the Lab flows through this page.
7.2 Page Title & Badge
The page title is “Test Talepleri” (Test Requests), rendered inside a PageContainer. Next to the title, a live Badge shows the count of pending tests. This badge updates every 5 seconds alongside the data, giving the lab user an immediate visual cue of their workload without scrolling.
7.3 The ProTable — 9-Column Test Queue
The core of the page is an Ant Design Pro ProTable configured with horizontal scroll (x: 1100), built-in pagination, density toggle, column visibility settings, and manual reload. The table fetches up to 100 records from GET /api/production/test-requests/all and displays them in 9 columns:
| # | Column | Width | Fixed | Rendering Details |
|---|---|---|---|---|
| 1 | Kod | 70px | Left | Purple monospace Tag: L{id}. Unique identifier per test request. |
| 2 | Test | auto | — | Two lines: test name (bold) from test_standard.test_name, standard number below in smaller gray text. |
| 3 | İş Kartı | 80px | — | Blue Tag: #{work_card_id}. Links the test to its production work card. |
| 4 | Makine | 150px | — | Machine display name. Uses machine_name from backend (brand – model) or falls back to a client-side lookup mapping machine_type codes to Turkish names (Kabatel Çekme, Kalaylama, İncetel Çekme, Buncher, Extruder, E-Beam, Aktarma, Paletleme). |
| 5 | Aralık | 100px | — | Interval Tag: maps üretim_başı → “Üretim Başı”, her_sepet_sonu → “Her Sepet”, üretim_sonu → “Üretim Sonu”. |
| 6 | Talep | 130px | — | Requested timestamp formatted as DD.MM.YY HH:mm in Turkey timezone (Europe/Istanbul). Uses dayjs with UTC-to-local conversion. |
| 7 | Durum | 100px | — | Status tag: Beklemede (orange), Onaylandı (green + check icon), Başarısız (red + X icon). |
| 8 | Test Eden | 100px | — | Tester’s name if completed, or “–” if still pending. |
| 9 | İşlemler | 140px | Right | Conditional buttons: ExperimentOutlined (start test — pending + lab_user only), EyeOutlined (view results — completed tests), DeleteOutlined with Popconfirm (super_admin only). |
7.4 Client-Side Search
The toolbar renders a search input with SearchOutlined icon (200px wide, rounded). Search is entirely client-side using useMemo: it filters the full dataset against 10+ fields simultaneously:
- Test code (`L{id}`)
- Test name and standard number
- Work card ID
- Machine name (from API) and machine type (from client-side mapping)
- Interval text (Turkish display names)
- Tester name
- Status in Turkish: “beklemede”, “onaylandı”, “başarısız”
7.5 Real-Time Polling
On mount, the component calls fetchTestRequests() and starts a setInterval at 5-second intervals. The cleanup function clears the interval on unmount. This provides near-real-time updates without WebSocket complexity. When a new test request arrives from Production, it appears in the table within 5 seconds.
7.6 Notification Deep-Link Support
The component reads URL query parameters on mount via useLocation(). When a lab user clicks a notification (e.g., “Test Talebi: Direnç Testi”), they are navigated to /lab/test?test_request_id=42. The page displays an info message: “Test talebi görüntüleniyor: #42”. This creates a seamless flow from notification to action.
7.7 The Dynamic Test Form
This is the most technically sophisticated part of the Lab module. When a lab user clicks the experiment icon on a pending test, a centered Modal (700px wide, blurred backdrop) opens. The modal title shows the test name with a subtitle: “İş Kartı #{work_card_id} • {interval}”. The form is generated entirely from the test standard’s parameter definitions.
Step 1: Fetch Test Standard
The system calls GET /api/teknik/standards/tests/{id}/full to retrieve the complete test standard including all parameters sorted by parameter_order. While loading, a Spin component with “Test standardı yükleniyor...” is displayed.
Step 2: Generate Form Fields
The form generation algorithm iterates over test_samples (number of samples) × parameters (measurement fields):
For each sample (1 to test_samples):
If multiple samples → render Divider "Numune {n}"
For each parameter in test_standard.parameters:
Render InputNumber with:
• name: "sample_{s}_param_{param.id}"
• label: parameter_name (e.g. "Diameter", "Resistance")
• addonAfter: parameter_unit (e.g. "mm", "Ω/km")
• placeholder: "Hedef: {target_value}" if defined
• min/max bounds from parameter definition
• step: 0.01
• required flag from is_required
• flex layout: 200px min-width, wrapping
This means a test standard with 3 parameters and 2 samples generates 6 input fields across 2 sample groups. A standard with 1 parameter and 1 sample generates a single field. The form always adapts — no two test forms look alike.
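The field-count arithmetic can be verified with a tiny generator (a sketch; `form_fields` is a hypothetical helper mirroring the algorithm above):

```python
def form_fields(test_samples: int, parameters: list[dict]) -> list[str]:
    """One field name per (sample, parameter) pair: sample_{s}_param_{id}."""
    return [
        f"sample_{s}_param_{p['id']}"
        for s in range(1, test_samples + 1)
        for p in parameters
    ]

params = [{"id": 10, "parameter_name": "Diameter"},
          {"id": 11, "parameter_name": "Resistance"},
          {"id": 12, "parameter_name": "Tensile Strength"}]
# 2 samples x 3 parameters -> 6 fields; 1 sample x 1 parameter -> 1 field
```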
Step 3: Edge Cases
- No parameters defined: Shows message “Bu test için parametre tanımlanmamış” with just the test name. The user can still pass/fail the test without measurements.
- Standard not found: Shows “Test standardı bulunamadı”.
Step 4: Notes Field
Below all measurement fields, a TextArea (2 rows) labeled “Notlar” with placeholder “Test ile ilgili notlar...” allows the lab user to add observations.
Step 5: Action Buttons
Three buttons in a flex row at the bottom:
| Button | Style | Icon | Action |
|---|---|---|---|
| İptal | Default | — | Closes modal, resets form |
| Başarısız | Danger (red) | CloseCircleOutlined | Calls handleTestComplete(false) |
| Onaylandı | Primary (green, uses token.colorSuccess) | CheckCircleOutlined | Calls handleTestComplete(true) |
7.8 Test Completion & Measurement Assembly
When the lab user clicks either “Başarısız” or “Onaylandı”, the system:
- Validates all required form fields via `form.validateFields()`
- Assembles the measurement JSON from the dynamic field names: `{ "samples": [ { "sample_num": 1, "params": { "Diameter": 1.82, "Resistance": 9.45 } }, { "sample_num": 2, "params": { "Diameter": 1.81, "Resistance": 9.52 } } ] }`
- Sends `POST /api/production/test-requests/{id}/complete` with `{ passed: true/false, measurements: {...}, notes: "..." }`
- On success: shows “Test onaylandı” or “Test başarısız”, closes modal, resets form, refreshes the table
7.9 Auto-Retry on Failure
When the backend receives a passed: false completion, it does three things atomically in a single database transaction:
- Updates the original: Sets status to `FAILED`, records tester, timestamp, and measurements
- Creates a retry: A new `ProductionTestRequest` with identical `work_card_id`, `session_id`, `output_id`, `test_standard_id`, `machine_type`, and `test_interval` — but with `retry_count = original + 1` and `parent_request_id = original.id`
- Sends notifications: High-priority notifications to the original requester and all super admins, with deep link to the test page and `retry_request_id` in extra data
The retry request appears in the Lab queue as a new pending test within 5 seconds (next poll cycle). The API response includes retry_created: true and retry_request_id so the frontend knows a retry was spawned.
7.10 Viewing Completed Test Results
For completed tests (passed or failed), the eye icon opens a read-only Modal (480px wide, blurred backdrop). The title shows the test name with a pass/fail tag, plus metadata: L{id} • #{work_card_id} • DD.MM HH:mm.
The result display renders a compact measurement table:
- Header row: Column
#(sample number) followed by one column per parameter name, with the unit shown in smaller text below - Data rows: One row per sample, with monospace values aligned under each parameter
- Notes: If the lab user wrote notes, they appear in a gray box below the table
- No measurements: If no parameter measurements were recorded, shows “Parametre ölçümü kaydedilmemiş”
7.11 Delete Functionality
Only super_admin users see the delete button (red trash icon). Clicking it shows a Popconfirm: “Silmek istediğinize emin misiniz?” with Evet/Hayır options. Confirmed deletions call DELETE /api/production/test-requests/{id} and optimistically remove the row from state.
7.12 Permission Model
| Action | Required Role | Implementation |
|---|---|---|
| View test list | Any authenticated user with canLab | Route guard in routes.ts |
| Start / complete a test | lab_user or super_admin | Frontend: isLabUser check hides button. Backend: user_type not in ['lab_user', 'super_admin'] returns 403. |
| Delete a test request | super_admin | Frontend: isSuperAdmin check hides button. Backend: user_type != 'super_admin' returns 403. |
7.13 Backend: How Production Creates Test Requests
Test requests originate from the Production module, not from Lab. The POST /api/production/test-requests/create-for-interval endpoint is called at three lifecycle points:
The frequency field supports comma-separated values (e.g., "üretim_başı,her_sepet_sonu"), meaning a single test can be triggered at multiple intervals. For each matching test, the system verifies the TestStandard exists in the database before creating the request.
7.14 Backend: The “All Tests” Endpoint
The GET /api/production/test-requests/all endpoint powers the Lab page. It eager-loads 5 relationships (test_standard, work_card, session, requester, tester) in a single query, supports status filtering, limits to 50 by default (frontend requests 100), and resolves machine names by looking up the session’s machine_id against the correct machine table for the machine_type. The response also includes a global pending_count for the title badge.
7.15 Backend: The Bonded Tests System
Two endpoints serve bonded test queries — by output ID and by output code (e.g., X1, X2). They return all tests that apply to a specific production output, grouped by interval. The bonding logic checks three levels:
- Direct bond: `output_id` matches — `her_sepet_sonu` tests tied to a specific basket/reel
- Lot-level bond: Same session + interval is `üretim_başı` or `üretim_sonu` — these lot-level tests apply to every output in the session
- Legacy bond: Same session + `output_id IS NULL` + `linked_output_ids IS NULL` — backward compatibility
The response groups tests into { "üretim_başı": [...], "her_sepet_sonu": [...], "üretim_sonu": [...] } with a summary object containing counts per interval. This gives any consumer (Production page, QR scanner, reports) a complete quality picture for a single output.
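Grouping bonded tests into the per-interval response can be sketched as follows (`group_by_interval` is a hypothetical name; the output shape follows the response described above):

```python
def group_by_interval(tests: list[dict]) -> dict:
    """Group tests into {interval: [...]} with a per-interval count summary."""
    grouped = {"üretim_başı": [], "her_sepet_sonu": [], "üretim_sonu": []}
    for t in tests:
        grouped.setdefault(t["test_interval"], []).append(t)
    summary = {interval: len(items) for interval, items in grouped.items()}
    return {"tests": grouped, "summary": summary}

tests = [{"id": 1, "test_interval": "üretim_başı"},
         {"id": 2, "test_interval": "her_sepet_sonu"},
         {"id": 3, "test_interval": "her_sepet_sonu"}]
result = group_by_interval(tests)
```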
7.16 CRUD Flow
CREATE (Test Request)
- Production operator starts a session, records an output, or ends a session
- Production backend reads
work_card.material_details.tests - Filters by the current interval’s frequency
- Calls
POST /api/production/test-requests/create-for-interval - For each matching test: creates
ProductionTestRequestwith statusPENDING - Sends high-priority notifications to all
lab_userandsuper_admin(excluding requester) - New request appears in Lab queue within 5 seconds
READ (Test Queue & Results)
- Lab page loads → `GET /api/production/test-requests/all?limit=100`
- Backend eager-loads 5 relationships, resolves machine names
- Polls every 5 seconds for updates
- Client-side search filters across 10+ fields without additional API calls
- Individual test details: `GET /api/production/test-requests/{id}` with full parameter list
- Bonded tests per output: `GET /bonded-tests/output/{id}` grouped by interval
UPDATE (Test Completion)
- Lab user clicks experiment icon on pending test
- System fetches full test standard with parameters from `GET /api/teknik/standards/tests/{id}/full`
- Dynamic form renders: samples × parameters → InputNumber fields
- Lab user fills measurements, adds optional notes
- Clicks “Onaylandı” (passed) or “Başarısız” (failed)
- `POST /api/production/test-requests/{id}/complete` with `{ passed, measurements, notes }`
- Backend validates: must be pending, user must be lab_user/super_admin
- Updates status, records `tested_by`, `tested_at`, stores measurement JSON
- Sends notifications to requester + super admins (medium priority for pass, high for fail)
- Frontend refreshes table, modal closes
DELETE
- Super admin clicks trash icon on any test request
- Popconfirm: “Silmek istediğinize emin misiniz?”
- Confirmed → `DELETE /api/production/test-requests/{id}`
- Backend: verifies `super_admin`, deletes record
- Frontend: optimistically removes row from state
Why dynamic forms matter: A cable factory tests dozens of different standards — from IEC 60228 conductor resistance to UL flame tests to custom SLN factory specs. Each standard has different parameters, different sample counts, and different units. Hardcoding forms for each standard would be unmaintainable. Instead, the system reads the test standard definition (stored in the Teknik module) and generates the form at runtime. Adding a new test standard with new parameters requires zero frontend changes — the form adapts automatically. This is what makes the Lab module scalable to any number of quality standards.
8. CONCLUSION
The Laboratory module is deceptively simple in scope — two pages, five tables, fifteen endpoints — but it carries an outsized responsibility. It is the only system that can say “yes, this cable meets the standard” or “no, stop and retest.” Without it, production outputs are unverified metal.
Three design decisions define this module:
- Tests are defined in Teknik, triggered by Production, executed by Lab. This separation of concerns means that no one can skip a required test — the test requirements are embedded in the cable design itself and automatically enforced at each production interval.
- Forms are generated from data, not coded. The dynamic form system means adding a new test standard (with new parameters, units, and sample counts) requires zero frontend changes. The form renders from the database definition, making the system scalable to any number of quality standards.
- Failed tests create their own successors. The auto-retry mechanism with `parent_request_id` chaining ensures that a failed test is never forgotten — a new pending request immediately appears in the queue, and the full retry history is preserved for audit.
The module currently operates with polling (5-second intervals) rather than WebSockets, and the Dashboard page uses mock data. These are conscious decisions that prioritize simplicity and correctness in the early deployment phase. The architecture is ready for real-time upgrades and live dashboard integration when the factory’s testing volume demands it.
Together with Teknik (standards), Production (triggers), and Admin (permissions), the Lab module completes the quality assurance loop that allows the factory to prove compliance to IEC-EN, UL, and custom SLN standards — from raw material to finished cable.