
LABORATORY — QUALITY TESTING & COMPLIANCE

The quality gatekeeper of the factory. Every cable produced must pass through lab testing at defined intervals before it can move to the next production step or ship to a customer. The Lab module bridges Teknik (test standards), Production (test requests), and Quality (pass/fail with measurements).

February 2026 • Solen Kablo • Living Document

The Laboratory module is the factory’s quality conscience. It does not create work — it receives work from Production in the form of test requests that are automatically generated at three critical intervals: production start, after every basket/reel output, and production end. These intervals are not arbitrary; they are defined in the cable design during the Teknik phase and embedded into work cards. When a test request arrives, a lab user opens a dynamically-generated measurement form whose fields, units, sample counts, and parameters are all pulled from the test standard — no two test forms look alike. The lab user records measurements, marks the test as passed or failed, and the system notifies the production operator. If a test fails, the system automatically creates a retry request. Tests are bonded to specific production outputs, creating full traceability from a finished cable reel back to every quality check it underwent. The Lab module is the reason the factory can prove compliance to IEC-EN, UL, and custom SLN standards.

2 pages • ~20 API endpoints • 5 database tables • 1900+ lines of code

TABLE OF CONTENTS

  1. What Laboratory Does
  2. The Data Flow
  3. The Database Layer
  4. The Backend Architecture
  5. The Frontend
  6. Lab Panel (Dashboard)
  7. Test Yönetimi (Test Management)
  8. Conclusion

1. WHAT LABORATORY DOES

The Laboratory module answers four questions that the entire production system depends on:

  1. “Does this material meet the standard?” — Every cable design specifies which tests must be performed and at what frequency. The Lab executes these tests and records quantitative measurements against defined parameters.
  2. “Can production continue?” — Production pages poll test status every 5 seconds. If tests for a given interval are still pending or have failed, the operator sees the status. Passing all tests clears the path for the next step.
  3. “What happened when a test failed?” — Failed tests automatically create retry requests. The retry chain is tracked via parent_request_id, so the system knows this is the 2nd or 3rd attempt at the same test for the same output.
  4. “Can we prove compliance?” — Every test result is permanently linked to the specific production output (basket/reel) it covers. Tests are bonded to outputs either directly (output_id) or at the lot level (linked_output_ids). This creates an unbroken traceability chain from finished product back through every quality check.

Key architectural insight: The Lab module does not define what tests to run — that comes from the Teknik module’s test standards and the cable design. The Lab module does not decide when to run them — that comes from the Production module which creates test requests at design-specified intervals. The Lab module’s sole responsibility is execution and recording: receive a test request, generate the measurement form, capture results, and report back. This separation makes it impossible for production to skip a required test, because the tests are embedded in the design itself.

2. THE DATA FLOW

The Lab module sits at the intersection of three other modules. Data flows into Lab from Teknik and Production, and results flow out back to Production and the notification system.

2.1 The Test Request Lifecycle

Cable Design (Teknik) → tests embedded in Work Card
Production starts session → üretim_başı tests created
Output recorded (basket) → her_sepet_sonu tests created → notification to Lab users → Lab executes test
Production ends session → üretim_sonu tests created
All tests bonded to outputs → full traceability

2.2 The Three Test Intervals

| Interval | Turkish Name | When Created | Scope | Bonding |
|---|---|---|---|---|
| üretim_başı | Üretim Başı | Session start | Entire production lot | Linked to first output (or all outputs via linked_output_ids) |
| her_sepet_sonu | Her Sepet Sonu | After each output | Specific basket/reel | Directly linked via output_id |
| üretim_sonu | Üretim Sonu | Session end | Entire production lot | Linked to last output (or all outputs via linked_output_ids) |

These intervals are not hard-coded in the Lab module. They come from the cable design’s material_details.tests[].frequency field, which supports comma-separated values (e.g., "üretim_başı,her_sepet_sonu" means a test runs both at start and after every basket).

2.3 What Happens When a Test Fails

Lab marks test FAILED → system creates retry request (retry_count + 1, parent_request_id set) → notification to operator + admins

The retry mechanism is automatic. When a lab user marks a test as failed, the backend immediately creates a new ProductionTestRequest with retry_count = original + 1 and parent_request_id = original.id. This chains retries together for full audit history. The new request appears in the Lab’s pending queue, and the production operator receives a high-priority notification.
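The retry chain can be sketched with a minimal stand-in model (the dataclass and helper below are illustrative; the real system persists a ProductionTestRequest row inside the completion transaction):

```python
from dataclasses import dataclass, field
from typing import Optional
import itertools

_ids = itertools.count(100)  # stand-in for an auto-increment PK

@dataclass
class TestRequest:
    """Minimal stand-in for ProductionTestRequest (illustrative, not the real model)."""
    test_standard_id: int
    id: int = field(default_factory=lambda: next(_ids))
    status: str = "PENDING"
    retry_count: int = 0
    parent_request_id: Optional[int] = None

def fail_and_retry(original: TestRequest) -> TestRequest:
    """Mark a test FAILED and spawn its retry, chained via parent_request_id."""
    original.status = "FAILED"
    return TestRequest(
        test_standard_id=original.test_standard_id,
        retry_count=original.retry_count + 1,
        parent_request_id=original.id,
    )
```

Walking the parent_request_id chain from any request back to a request with parent_request_id = None recovers the full retry history for audit.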

2.4 The Notification Bridge

| Event | Recipients | Priority | Deep Link |
|---|---|---|---|
| Test request created | All lab_user + super_admin (excludes requester) | High | /lab/test with session_id, work_card_id |
| Test passed | Requester + super_admin (excludes tester) | Medium | /lab/test with test_request_id |
| Test failed | Requester + super_admin (excludes tester) | High | /lab/test with test_request_id, retry_request_id |

3. THE DATABASE LAYER

The Lab module uses 5 tables. Two are the core test request tables (legacy + production-integrated), and three come from the Teknik module’s test standards system.

3.1 production_test_requests (20 columns) — The Primary Table

This is the real workhorse. Every test request created by Production lives here.

| Column | Type | Details |
|---|---|---|
| id | Integer PK | Auto-increment, indexed |
| work_card_id | Integer FK | References work_cards.id, indexed |
| session_id | Integer FK | References production_sessions.id, indexed |
| output_id | Integer FK | References production_outputs.id — set for her_sepet_sonu tests |
| linked_output_ids | JSON | Array of output IDs for lot-level tests (üretim_başı, üretim_sonu) |
| test_standard_id | Integer FK | References test_standards.id |
| machine_type | String(50) | Which machine: kabatel_cekme, kalaylama, incetel_cekme, buncher, extruder, ebeam |
| test_interval | String(50) | üretim_başı / her_sepet_sonu / üretim_sonu |
| input_slot | Integer | 0 or 1 for parallel production mode |
| retry_count | Integer | Default 0; incremented on each retry |
| parent_request_id | Integer FK | Self-reference — points to the original failed test |
| status | Enum | PENDING / PASSED / FAILED |
| measurements | JSON | Structure: { samples: [{ sample_num, params: { name: value } }] } |
| requested_by | Integer FK | User who triggered creation (usually the production operator) |
| requested_at | DateTime | UTC |
| tested_by | Integer FK | Lab user who completed the test |
| tested_at | DateTime | When the test was completed |
| notes | Text | Lab user notes |
| created_at | DateTime | UTC |
| updated_at | DateTime | Auto-updated |

3.2 test_requests (15 columns) — Legacy Table

The original test request system, still operational for backward compatibility. Uses material_id and material_qr instead of work card/session linkage. Status enum differs: test_bekleniyor, test_basarili, test_basarisiz, iptal. The production-integrated production_test_requests table is the primary system going forward.

3.3 test_standards (from Teknik module)

Defines what to test. Each standard has a category (IEC-EN, UL, SLN), test name, number of required samples, test method, and a set of parameters. Lab reads this to generate dynamic measurement forms.

| Column | Details |
|---|---|
| standard_category | IEC-EN, UL, or SLN (custom factory standard) |
| test_name / test_name_en | Turkish and English test names |
| standard_number | Reference standard (e.g., "IEC 60228") |
| test_samples | Number of samples required (≥ 1) |
| test_method | Description of how to perform the test |

3.4 test_parameters (from Teknik module)

Each test standard has one or more parameters. These are the actual measurement fields that appear in the lab’s dynamic form.

| Column | Details |
|---|---|
| parameter_name | e.g., "Diameter", "Resistance", "Tensile Strength" |
| parameter_unit | e.g., "mm", "Ω/km", "N/mm²" |
| parameter_order | Display order in the form |
| is_required | Whether the measurement is mandatory |

3.5 test_results (from Teknik module)

Stores historical test results linked to cable batches. Includes pass_fail status (PASS/FAIL/PENDING), measurements JSON, and links to the test standard and production session.

Dual-system architecture: The database has both test_requests (legacy, material-based) and production_test_requests (current, production-based). The legacy system was the first implementation, linking tests to raw materials. The production system evolved to link tests to work cards, sessions, and specific outputs. Both coexist — the legacy endpoints are still functional but the production-integrated system is what drives the factory floor today.

4. THE BACKEND ARCHITECTURE

4.1 Route Files

| File | Lines | Prefix | Endpoints | Purpose |
|---|---|---|---|---|
| test_routes.py | 318 | /api/test-requests | 6 | Legacy test request CRUD: create, list, get, complete, update, delete. |
| test_integration_routes.py | 745 | /api/production/test-requests | 9 | Production-integrated test system: create for interval, check status, pending, all, details, complete, delete, bonded tests by output ID and by output code. |

4.2 API Contract — Production Test Endpoints (9)

| Method | Path | Permission | What It Does |
|---|---|---|---|
| POST | /api/production/test-requests/create-for-interval | authenticated | Create test requests for a specific interval. Reads tests from the work card's material_details.tests, filters by frequency, and creates one ProductionTestRequest per matching test. Sends notifications to lab users. |
| GET | /api/production/test-requests/check-status/{session_id}/{interval} | authenticated | Check whether all tests for an interval have passed. Returns an all_passed boolean, pending/failed/passed counts, and lists of pending/failed tests. Used by the production frontend for status polling. |
| GET | /api/production/test-requests/pending/{session_id} | authenticated | Get all pending tests for a session. Eager-loads test_standard and requester. |
| GET | /api/production/test-requests/all | authenticated | Get all production test requests (for the Lab page). Includes machine names and test standard info. Filterable by status; limit defaults to 50. |
| GET | /api/production/test-requests/{test_id} | authenticated | Get a specific test request with full details: test standard with sorted parameters, work card info. |
| POST | /api/production/test-requests/{test_id}/complete | lab_user, super_admin | Complete a test. Sets status to PASSED or FAILED, stores the measurements JSON, records the tester. If failed: auto-creates a retry request. Sends notifications. |
| DELETE | /api/production/test-requests/{test_id} | super_admin | Delete a test request. |
| GET | /api/production/test-requests/bonded-tests/output/{output_id} | authenticated | Get all tests bonded to a specific output: direct (output_id match), lot-level (linked_output_ids contains), and legacy (session match). Groups by interval. |
| GET | /api/production/test-requests/bonded-tests/output-code/{output_code} | authenticated | Same as above but by output code (e.g., X1, X2); resolves the code to an ID internally. |
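As a sketch of what the check-status endpoint computes from a session's requests (field names follow the documented response shape; the function itself is hypothetical):

```python
def check_interval_status(requests: list, interval: str) -> dict:
    """Summarize test state for one interval, like /check-status/{session_id}/{interval}.

    `requests` is the session's test requests as dicts; illustrative only.
    """
    scoped = [r for r in requests if r["test_interval"] == interval]
    pending = [r for r in scoped if r["status"] == "PENDING"]
    failed = [r for r in scoped if r["status"] == "FAILED"]
    passed = [r for r in scoped if r["status"] == "PASSED"]
    return {
        # Production may proceed only when tests exist and none are open or failed
        "all_passed": bool(scoped) and not pending and not failed,
        "pending_count": len(pending),
        "failed_count": len(failed),
        "passed_count": len(passed),
        "pending_tests": pending,
        "failed_tests": failed,
    }
```

The production frontend polls this summary every 5 seconds and unblocks the next step once all_passed flips to true.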

4.3 Legacy Test Endpoints (6)

| Method | Path | Permission | What It Does |
|---|---|---|---|
| POST | /api/test-requests/ | authenticated | Create a legacy test request (material-based). |
| GET | /api/test-requests/ | authenticated | List legacy test requests with filters. |
| GET | /api/test-requests/{id} | authenticated | Get a specific legacy test request. |
| POST | /api/test-requests/{id}/complete | lab_user, super_admin | Complete a legacy test with pass/fail and results. |
| PUT | /api/test-requests/{id} | super_admin | Update a legacy test request. |
| DELETE | /api/test-requests/{id} | super_admin | Delete a legacy test request. |

4.4 How Production Creates Test Requests

Test requests are not created by the Lab module. They are created by Production at three points in the session lifecycle:

  1. Session start (POST /api/production/start-session): reads work_card.material_details.tests, filters for üretim_başı frequency, creates one ProductionTestRequest per matching test (one per slot in parallel mode).
  2. Output recording (POST /api/production/record-output): filters for her_sepet_sonu frequency, creates test requests linked to the specific output_id.
  3. Session end (POST /api/production/end-session): filters for üretim_sonu frequency, creates test requests, then links lot-level tests (üretim_başı and üretim_sonu) to all relevant outputs via linked_output_ids.

4.5 The Bonded Tests System

Tests are bonded to production outputs in three ways, checked in this order:

  1. Direct bond: output_id matches the output — used for her_sepet_sonu tests that apply to a single basket/reel.
  2. Lot-level bond: session_id matches AND test_interval is üretim_başı or üretim_sonu — these lot-level tests apply to all outputs in the session.
  3. Legacy bond: session_id matches AND output_id IS NULL AND linked_output_ids IS NULL — backward compatibility for old test requests.

The GET /bonded-tests/output/{output_id} endpoint returns tests grouped by interval, giving a complete quality picture for any single output.
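The three bond levels can be expressed as a single predicate. This is an illustrative sketch of the documented check order, not the actual query logic:

```python
def is_bonded(test: dict, output_id: int, session_id: int) -> bool:
    """Decide whether a test request applies to a given output.

    Mirrors the three documented bond levels; the dict fields are illustrative.
    """
    # 1. Direct bond: test tied to this exact basket/reel
    if test.get("output_id") == output_id:
        return True
    if test.get("session_id") != session_id:
        return False
    # 2. Lot-level bond: start/end-of-lot tests cover every output in the session
    if test.get("test_interval") in ("üretim_başı", "üretim_sonu"):
        return True
    # 3. Legacy bond: old requests with no output linkage at all
    return test.get("output_id") is None and test.get("linked_output_ids") is None
```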

4.6 Parallel Production Support

Some machines (e.g., OTOMEC Kalaylama, CGN E-beam) support parallel processing with two input slots. The Lab module handles this through the input_slot field (0 or 1). At session start and end, one test request is created per slot for lot-level tests. Slot-specific outputs are linked via source_qr_codes in HalfProductStock.

5. THE FRONTEND

5.1 Page Structure

| Route | Component | Lines | Purpose |
|---|---|---|---|
| /lab/dashboard | Lab/Dashboard/index.tsx | 133 | Overview panel with statistics cards and alert/test tables. Currently uses mock data — will be connected to real APIs. |
| /lab/test | Lab/Test/index.tsx | 803 | The primary Lab page. ProTable of all production test requests with dynamic measurement forms, completion workflow, and result viewing. |

5.2 The Test Page — Core Lab Interface

This is where lab users spend most of their time. It shows all production test requests in a ProTable with 9 columns and supports the complete test lifecycle.

ProTable Columns (9)

| # | Column | Width | Details |
|---|---|---|---|
| 1 | Kod | 70px, fixed left | Purple tag: L{id} |
| 2 | Test | auto | Test name + standard number |
| 3 | İş Kartı | 80px | Blue tag: #{work_card_id} |
| 4 | Makine | 150px | Machine name (Turkish) |
| 5 | Aralık | 100px | Interval tag: Üretim Başı / Her Sepet / Üretim Sonu |
| 6 | Talep | 130px | Requested at (DD.MM.YY HH:mm, Turkey timezone) |
| 7 | Durum | 100px | Status tag: Beklemede (orange), Onaylandı (green + check), Başarısız (red + X) |
| 8 | Test Eden | 100px | Tester name or "-" |
| 9 | İşlemler | 140px, fixed right | Conditional: "Test Başlat" (pending + lab_user), "Görüntüle" (completed), Delete (super_admin) |

Dynamic Test Forms

When a lab user clicks “Test Başlat”, the system fetches the full test standard from GET /api/teknik/standards/tests/{id}/full including all parameters. The form is generated dynamically, with one input field per sample and parameter; the full algorithm is described in Section 7.7.

The measurement data is assembled into a structured JSON: { samples: [{ sample_num: 1, params: { "Diameter": 1.82, "Resistance": 9.45 } }] }

Real-Time Updates

The test list polls every 5 seconds via setInterval(fetchTestRequests, 5000). No WebSocket is used for the Lab page — polling was chosen for simplicity and reliability.

Result Viewing

Clicking “Görüntüle” on a completed test opens a read-only modal with a compact measurement table: column headers are parameter names (with units), rows are samples, cells are recorded values. Notes are shown below the table.

5.3 The Dashboard — Overview Panel

Currently shows 4 statistic cards (Pending Alerts, Active Tests, Completed Tests, Today’s Tests) and two tables (Pending Lab Alerts, Active Test Processes). All data is currently mock/hardcoded — this is a prepared shell that will be connected to real APIs as the Lab module matures.

5.4 Client-Side Search

The Test page implements comprehensive client-side search filtering across: test code (L{id}), test name, standard number, work card ID, machine name/type, interval text, tester name, and status (in Turkish).

5.5 Permission System

| Page | Route Guard | Buttons in Manifest |
|---|---|---|
| Lab Dashboard | canLab | access_page, view_stats |
| Lab Test | canLab | access_page, create_test, view_tests |

User type checks in the Test page: isLabUser (lab_user or super_admin) can start/complete tests. isSuperAdmin can delete test requests. Regular users have read-only access.

6. LAB PANEL (DASHBOARD)

6.1 Purpose

The Lab Panel is the landing page for laboratory users at /lab/dashboard. Its purpose is to provide an at-a-glance overview of the lab’s workload: how many alerts are pending, how many tests are active, how many were completed, and what happened today. It is the first thing a lab user sees after logging in.

6.2 Current Status — Prepared Shell

The Dashboard is currently a prepared shell with mock data. All statistics and table rows are hardcoded — no API calls are made. This is intentional: the shell was built first to define the layout and information architecture, and will be connected to real endpoints as the Lab module matures. The component is 133 lines of clean React code, fully styled and ready for integration.

6.3 Layout & Components

The page uses Ant Design’s grid system with a PageContainer wrapper that displays the title “Lab Kontrol Paneli” and a personalized greeting showing the logged-in user’s name.

Statistics Row (4 cards)

Four Statistic cards in a responsive 4-column grid (Col lg={6}), each with an icon and color-coded value:

| # | Card Title | Mock Value | Icon | Color |
|---|---|---|---|---|
| 1 | Bekleyen Uyarılar (Pending Alerts) | 2 | AlertOutlined | Error (red) |
| 2 | Aktif Testler (Active Tests) | 2 | ExperimentOutlined | Warning (orange) |
| 3 | Tamamlanan Testler (Completed Tests) | 15 | CheckCircleOutlined | Success (green) |
| 4 | Bugünkü Testler (Today's Tests) | 8 | ClockCircleOutlined | Primary (blue) |

Data Tables (2 side-by-side)

Below the stats, two tables sit in a responsive 2-column layout (Col lg={12}):

| Table | Card Header | Badge | Columns | Mock Rows |
|---|---|---|---|---|
| Bekleyen Lab Uyarıları | Pending Lab Alerts | Acil (red tag) | Makine, Uyarı, Saat, İşlem | 2 rows: Kabatel Çekme 01 (new production alert), Kalaylama 01 (slow mode approval) |
| Aktif Test Süreçleri | Active Test Processes | Devam Ediyor (processing tag) | Test Kodu, Ürün, Durum, Öncelik, İşlem | 2 rows: SLHT001 - 1.8mm Tel (testing, high priority), SLDT002 - Kalay Tel (awaiting approval, normal) |

Each table has an action button column: “Test Başlat” for alerts, “Onayla” for active tests. These buttons are currently non-functional placeholders.

6.4 Authentication Integration

Despite being mock, the Dashboard already integrates with the real auth system. It calls RealAuthService.getCurrentUser() to display the logged-in user’s name in the subtitle. The route is guarded by the canLab permission in the route configuration.

6.5 Roadmap

When connected to real APIs, the Dashboard will pull live data from the production test-request endpoints in place of the hardcoded values.

Why ship a mock page? In a factory ERP, the dashboard layout is critical for user adoption. Building the shell first — with realistic mock data and the exact card/table structure — allows stakeholders to validate the information hierarchy before any backend work is connected. The 133-line component is intentionally minimal so that swapping mock data for real API calls is a straightforward integration task.

7. TEST YÖNETİMİ (TEST MANAGEMENT)

7.1 Purpose & Scale

Test Yönetimi is the operational heart of the Laboratory module. Located at /lab/test, it is where lab users receive, execute, and record every quality test the factory runs. The frontend component is 803 lines of React/TypeScript, the backend route file is 745 lines of Python/FastAPI, and together they handle 9 production-integrated API endpoints plus 6 legacy endpoints. Every interaction between Production and the Lab flows through this page.

7.2 Page Title & Badge

The page title is “Test Talepleri” (Test Requests), rendered inside a PageContainer. Next to the title, a live Badge shows the count of pending tests. This badge updates every 5 seconds alongside the data, giving the lab user an immediate visual cue of their workload without scrolling.

7.3 The ProTable — 9-Column Test Queue

The core of the page is an Ant Design Pro ProTable configured with horizontal scroll (x: 1100), built-in pagination, density toggle, column visibility settings, and manual reload. The table fetches up to 100 records from GET /api/production/test-requests/all and displays them in 9 columns:

| # | Column | Width | Fixed | Rendering Details |
|---|---|---|---|---|
| 1 | Kod | 70px | Left | Purple monospace Tag: L{id}. Unique identifier per test request. |
| 2 | Test | auto | — | Two lines: test name (bold) from test_standard.test_name, standard number below in smaller gray text. |
| 3 | İş Kartı | 80px | — | Blue Tag: #{work_card_id}. Links the test to its production work card. |
| 4 | Makine | 150px | — | Machine display name. Uses machine_name from backend (brand – model) or falls back to a client-side lookup mapping machine_type codes to Turkish names (Kabatel Çekme, Kalaylama, İncetel Çekme, Buncher, Extruder, E-Beam, Aktarma, Paletleme). |
| 5 | Aralık | 100px | — | Interval Tag: maps üretim_başı → "Üretim Başı", her_sepet_sonu → "Her Sepet", üretim_sonu → "Üretim Sonu". |
| 6 | Talep | 130px | — | Requested timestamp formatted as DD.MM.YY HH:mm in Turkey timezone (Europe/Istanbul). Uses dayjs with UTC-to-local conversion. |
| 7 | Durum | 100px | — | Status tag: Beklemede (orange), Onaylandı (green + check icon), Başarısız (red + X icon). |
| 8 | Test Eden | 100px | — | Tester's name if completed, or "–" if still pending. |
| 9 | İşlemler | 140px | Right | Conditional buttons: ExperimentOutlined (start test — pending + lab_user only), EyeOutlined (view results — completed tests), DeleteOutlined with Popconfirm (super_admin only). |
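The machine-name fallback in the Makine column can be sketched as a simple lookup. The sketch is in Python for consistency with the other examples (the real code is TypeScript), and the key set mirrors the documented machine_type codes:

```python
# Illustrative client-side fallback mapping of machine_type codes to Turkish names
MACHINE_NAMES_TR = {
    "kabatel_cekme": "Kabatel Çekme",
    "kalaylama": "Kalaylama",
    "incetel_cekme": "İncetel Çekme",
    "buncher": "Buncher",
    "extruder": "Extruder",
    "ebeam": "E-Beam",
}

def machine_display_name(row: dict) -> str:
    """Prefer the backend-resolved machine_name, else fall back to the lookup table."""
    backend_name = row.get("machine_name")
    if backend_name:
        return backend_name
    code = row.get("machine_type", "")
    return MACHINE_NAMES_TR.get(code, code or "-")
```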

7.4 Client-Side Search

The toolbar renders a search input with a SearchOutlined icon (200px wide, rounded). Search is entirely client-side using useMemo, filtering the full dataset simultaneously against 10+ fields: test code (L{id}), test name, standard number, work card ID, machine name/type, interval text, tester name, and status text (in Turkish).

7.5 Real-Time Polling

On mount, the component calls fetchTestRequests() and starts a setInterval at 5-second intervals. The cleanup function clears the interval on unmount. This provides near-real-time updates without WebSocket complexity. When a new test request arrives from Production, it appears in the table within 5 seconds.

7.6 Notification Deep-Link Support

The component reads URL query parameters on mount via useLocation(). When a lab user clicks a notification (e.g., “Test Talebi: Direnç Testi”), they are navigated to /lab/test?test_request_id=42. The page displays an info message: “Test talebi görüntüleniyor: #42”. This creates a seamless flow from notification to action.

7.7 The Dynamic Test Form

This is the most technically sophisticated part of the Lab module. When a lab user clicks the experiment icon on a pending test, a centered Modal (700px wide, blurred backdrop) opens. The modal title shows the test name with a subtitle: “İş Kartı #{work_card_id} • {interval}”. The form is generated entirely from the test standard’s parameter definitions.

Step 1: Fetch Test Standard

The system calls GET /api/teknik/standards/tests/{id}/full to retrieve the complete test standard including all parameters sorted by parameter_order. While loading, a Spin component with “Test standardı yükleniyor...” is displayed.

Step 2: Generate Form Fields

The form generation algorithm iterates over test_samples (number of samples) × parameters (measurement fields):

For each sample (1 to test_samples):
  If multiple samples → render Divider "Numune {n}"
  For each parameter in test_standard.parameters:
    Render InputNumber with:
      • name: "sample_{s}_param_{param.id}"
      • label: parameter_name (e.g. "Diameter", "Resistance")
      • addonAfter: parameter_unit (e.g. "mm", "Ω/km")
      • placeholder: "Hedef: {target_value}" if defined
      • min/max bounds from parameter definition
      • step: 0.01
      • required flag from is_required
      • flex layout: 200px min-width, wrapping

This means a test standard with 3 parameters and 2 samples generates 6 input fields across 2 sample groups. A standard with 1 parameter and 1 sample generates a single field. The form always adapts — no two test forms look alike.

Step 3: Edge Cases

Step 4: Notes Field

Below all measurement fields, a TextArea (2 rows) labeled “Notlar” with placeholder “Test ile ilgili notlar...” allows the lab user to add observations.

Step 5: Action Buttons

Three buttons in a flex row at the bottom:

| Button | Style | Icon | Action |
|---|---|---|---|
| İptal | Default | — | Closes modal, resets form |
| Başarısız | Danger (red) | CloseCircleOutlined | Calls handleTestComplete(false) |
| Onaylandı | Primary (green, uses token.colorSuccess) | CheckCircleOutlined | Calls handleTestComplete(true) |

7.8 Test Completion & Measurement Assembly

When the lab user clicks either “Başarısız” or “Onaylandı”, the system:

  1. Validates all required form fields via form.validateFields()
  2. Assembles the measurement JSON from the dynamic field names:
    {
      "samples": [
        { "sample_num": 1, "params": { "Diameter": 1.82, "Resistance": 9.45 } },
        { "sample_num": 2, "params": { "Diameter": 1.81, "Resistance": 9.52 } }
      ]
    }
  3. Sends POST /api/production/test-requests/{id}/complete with { passed: true/false, measurements: {...}, notes: "..." }
  4. On success: shows “Test onaylandı” or “Test başarısız”, closes modal, resets form, refreshes the table
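Assembly step 2 can be sketched as follows, assuming the documented field-naming convention sample_{s}_param_{param_id}; the helper itself is hypothetical:

```python
def assemble_measurements(form_values: dict, parameters: list, samples: int) -> dict:
    """Turn flat dynamic-form values into the documented measurements JSON.

    `parameters` carries {id, parameter_name} pairs from the test standard;
    field names follow "sample_{s}_param_{param_id}". Illustrative sketch.
    """
    assembled = []
    for s in range(1, samples + 1):
        params = {}
        for p in parameters:
            value = form_values.get(f"sample_{s}_param_{p['id']}")
            if value is not None:  # optional fields may be left empty
                params[p["parameter_name"]] = value
        assembled.append({"sample_num": s, "params": params})
    return {"samples": assembled}
```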

7.9 Auto-Retry on Failure

When the backend receives a passed: false completion, it does three things atomically in a single database transaction:

  1. Updates the original: Sets status to FAILED, records tester, timestamp, and measurements
  2. Creates a retry: A new ProductionTestRequest with identical work_card_id, session_id, output_id, test_standard_id, machine_type, and test_interval — but with retry_count = original + 1 and parent_request_id = original.id
  3. Sends notifications: High-priority notifications to the original requester and all super admins, with deep link to the test page and retry_request_id in extra data

The retry request appears in the Lab queue as a new pending test within 5 seconds (next poll cycle). The API response includes retry_created: true and retry_request_id so the frontend knows a retry was spawned.

7.10 Viewing Completed Test Results

For completed tests (passed or failed), the eye icon opens a read-only Modal (480px wide, blurred backdrop). The title shows the test name with a pass/fail tag, plus metadata: L{id} • #{work_card_id} • DD.MM HH:mm.

The result display renders a compact measurement table: column headers are parameter names (with units), rows are samples, and cells are the recorded values. Notes appear below the table.

7.11 Delete Functionality

Only super_admin users see the delete button (red trash icon). Clicking it shows a Popconfirm: “Silmek istediğinize emin misiniz?” with Evet/Hayır options. Confirmed deletions call DELETE /api/production/test-requests/{id} and optimistically remove the row from state.

7.12 Permission Model

| Action | Required Role | Implementation |
|---|---|---|
| View test list | Any authenticated user with canLab | Route guard in routes.ts |
| Start / complete a test | lab_user or super_admin | Frontend: isLabUser check hides button. Backend: user_type not in ['lab_user', 'super_admin'] returns 403. |
| Delete a test request | super_admin | Frontend: isSuperAdmin check hides button. Backend: user_type != 'super_admin' returns 403. |
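A plain-function sketch of these role checks (the real backend enforces them inside FastAPI route handlers; the action names here are illustrative):

```python
# Roles allowed per action, mirroring the documented permission model
COMPLETE_ROLES = {"lab_user", "super_admin"}
DELETE_ROLES = {"super_admin"}

def authorize(action: str, user_type: str) -> bool:
    """Return True if a user of `user_type` may perform `action`."""
    if action == "complete_test":
        return user_type in COMPLETE_ROLES
    if action == "delete_test":
        return user_type in DELETE_ROLES
    # Read-only actions are open to any authenticated user
    return True
```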

7.13 Backend: How Production Creates Test Requests

Test requests originate from the Production module, not from Lab. The POST /api/production/test-requests/create-for-interval endpoint is called at three lifecycle points:

Session start → read work_card.material_details.tests → filter by üretim_başı frequency → create one ProductionTestRequest per match
Output recorded → filter by her_sepet_sonu frequency → create with output_id linked → notify all lab_user + super_admin
Session end → filter by üretim_sonu frequency → link lot tests via linked_output_ids → full traceability established

The frequency field supports comma-separated values (e.g., "üretim_başı,her_sepet_sonu"), meaning a single test can be triggered at multiple intervals. For each matching test, the system verifies the TestStandard exists in the database before creating the request.

7.14 Backend: The “All Tests” Endpoint

The GET /api/production/test-requests/all endpoint powers the Lab page. It eager-loads 5 relationships (test_standard, work_card, session, requester, tester) in a single query, supports status filtering, limits to 50 by default (frontend requests 100), and resolves machine names by looking up the session’s machine_id against the correct machine table for the machine_type. The response also includes a global pending_count for the title badge.

7.15 Backend: The Bonded Tests System

Two endpoints serve bonded test queries — by output ID and by output code (e.g., X1, X2). They return all tests that apply to a specific production output, grouped by interval. The bonding logic checks three levels:

  1. Direct bond: output_id matches — her_sepet_sonu tests tied to a specific basket/reel
  2. Lot-level bond: Same session + interval is üretim_başı or üretim_sonu — these lot-level tests apply to every output in the session
  3. Legacy bond: Same session + output_id IS NULL + linked_output_ids IS NULL — backward compatibility

The response groups tests into { "üretim_başı": [...], "her_sepet_sonu": [...], "üretim_sonu": [...] } with a summary object containing counts per interval. This gives any consumer (Production page, QR scanner, reports) a complete quality picture for a single output.
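The grouping step might look like the following sketch (illustrative; the summary here is a simple per-interval count):

```python
from collections import defaultdict

def group_by_interval(tests: list) -> dict:
    """Group bonded tests by interval and compute a per-interval summary,
    mirroring the documented response shape (illustrative)."""
    grouped = defaultdict(list)
    for t in tests:
        grouped[t["test_interval"]].append(t)
    summary = {interval: len(items) for interval, items in grouped.items()}
    return {"tests": dict(grouped), "summary": summary}
```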

7.16 CRUD Flow

CREATE (Test Request)

  1. Production operator starts a session, records an output, or ends a session
  2. Production backend reads work_card.material_details.tests
  3. Filters by the current interval’s frequency
  4. Calls POST /api/production/test-requests/create-for-interval
  5. For each matching test: creates ProductionTestRequest with status PENDING
  6. Sends high-priority notifications to all lab_user and super_admin (excluding requester)
  7. New request appears in Lab queue within 5 seconds

READ (Test Queue & Results)

  1. Lab page loads → GET /api/production/test-requests/all?limit=100
  2. Backend eager-loads 5 relationships, resolves machine names
  3. Polls every 5 seconds for updates
  4. Client-side search filters across 10+ fields without additional API calls
  5. Individual test details: GET /api/production/test-requests/{id} with full parameter list
  6. Bonded tests per output: GET /bonded-tests/output/{id} grouped by interval

UPDATE (Test Completion)

  1. Lab user clicks experiment icon on pending test
  2. System fetches full test standard with parameters from GET /api/teknik/standards/tests/{id}/full
  3. Dynamic form renders: samples × parameters → InputNumber fields
  4. Lab user fills measurements, adds optional notes
  5. Clicks “Onaylandı” (passed) or “Başarısız” (failed)
  6. POST /api/production/test-requests/{id}/complete with { passed, measurements, notes }
  7. Backend validates: must be pending, user must be lab_user/super_admin
  8. Updates status, records tested_by, tested_at, stores measurement JSON
  9. If failed: creates retry request atomically (retry_count + 1, parent_request_id set)
  10. Sends notifications to requester + super admins (medium priority for pass, high for fail)
  11. Frontend refreshes table, modal closes

DELETE

  1. Super admin clicks trash icon on any test request
  2. Popconfirm: “Silmek istediğinize emin misiniz?”
  3. Confirmed → DELETE /api/production/test-requests/{id}
  4. Backend: verifies super_admin, deletes record
  5. Frontend: optimistically removes row from state

Why dynamic forms matter: A cable factory tests dozens of different standards — from IEC 60228 conductor resistance to UL flame tests to custom SLN factory specs. Each standard has different parameters, different sample counts, and different units. Hardcoding forms for each standard would be unmaintainable. Instead, the system reads the test standard definition (stored in the Teknik module) and generates the form at runtime. Adding a new test standard with new parameters requires zero frontend changes — the form adapts automatically. This is what makes the Lab module scalable to any number of quality standards.

8. CONCLUSION

The Laboratory module is deceptively simple in scope — two pages, five tables, fifteen endpoints — but it carries an outsized responsibility. It is the only system that can say “yes, this cable meets the standard” or “no, stop and retest.” Without it, production outputs are unverified metal.

Three design decisions define this module:

  1. Tests are defined in Teknik, triggered by Production, executed by Lab. This separation of concerns means that no one can skip a required test — the test requirements are embedded in the cable design itself and automatically enforced at each production interval.
  2. Forms are generated from data, not coded. The dynamic form system means adding a new test standard (with new parameters, units, and sample counts) requires zero frontend changes. The form renders from the database definition, making the system scalable to any number of quality standards.
  3. Failed tests create their own successors. The auto-retry mechanism with parent_request_id chaining ensures that a failed test is never forgotten — a new pending request immediately appears in the queue, and the full retry history is preserved for audit.

The module currently operates with polling (5-second intervals) rather than WebSockets, and the Dashboard page uses mock data. These are conscious decisions that prioritize simplicity and correctness in the early deployment phase. The architecture is ready for real-time upgrades and live dashboard integration when the factory’s testing volume demands it.

Together with Teknik (standards), Production (triggers), and Admin (permissions), the Lab module completes the quality assurance loop that allows the factory to prove compliance to IEC-EN, UL, and custom SLN standards — from raw material to finished cable.