## Symptom
Name the user-visible problem: slow checkout, dashboard timeouts, batch lag, lock waits, cloud cost pressure, or recurring incident risk.
## How it works
This workflow explains how a broad database optimization audit turns a slow workflow, timeout, cost spike, or capacity concern into evidence, scope, review, recommendations, and a human-approved action plan.
## Process
Each gate narrows uncertainty before anyone treats a recommendation as production-ready.
1. **Symptom.** Name the user-visible problem: slow checkout, dashboard timeouts, batch lag, lock waits, cloud cost pressure, or recurring incident risk.
2. **Evidence.** Collect safe signals such as slow query examples, execution plans, query statistics, index context, table sizes, wait notes, hosting limits, and recent deploy history.
3. **Scope.** Separate the likely database problem from application code, infrastructure, data growth, release timing, and the business constraints that shape the review.
4. **Review.** A human reviewer checks the evidence, the confidence level, missing context, production risk, and whether PostgreSQL should enter the specialist path.
5. **Recommendations.** The output is a ranked set of next steps: deeper evidence, staging tests, query rewrites, index candidates, maintenance checks, or configuration review points.
6. **Approval.** Production work waits for an owner, a test plan, a rollback path, a maintenance window, and explicit approval. The audit does not approve itself.
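The evidence step above assumes query examples are shared without customer data. A minimal sketch of that kind of redaction, assuming a regex-based approach (the `sanitize_sql` helper and its patterns are illustrative, not a real SQL parser; engines with normalized query text, such as pg_stat_statements, already do this better):

```python
import re

def sanitize_sql(query: str) -> str:
    """Redact literal values from a SQL example before sharing it.

    Approximate and regex-based: quoted string literals are replaced
    first, then bare numeric literals. A production sanitizer should
    rely on the engine's own query normalization instead.
    """
    query = re.sub(r"'(?:[^']|'')*'", "'?'", query)      # string literals
    query = re.sub(r"\b\d+(?:\.\d+)?\b", "?", query)     # numeric literals
    return query

print(sanitize_sql("SELECT * FROM orders WHERE email = 'a@b.com' AND total > 100"))
# SELECT * FROM orders WHERE email = '?' AND total > ?
```

The shape of the query (tables, joins, predicates) survives for review while the values that could identify a customer do not.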
## Boundaries
This page is meant to support safer decisions, not to act as an unrestricted database operator.
| Boundary | What it means |
|---|---|
| Read-only first | The first step should use metadata, statistics, query plans, logs, and sanitized examples. Broad production write access is not required for scoping. |
| Human review | Findings are reviewed before recommendations are treated as useful. Confidence, impact, reversibility, and missing evidence are part of the decision. |
| No automatic production writes | The workflow does not create indexes, rewrite deployed SQL, update data, run migrations, change configuration, or schedule maintenance automatically. |
| No hosted storage on this page | This static page does not provide uploads, accounts, hosted collector storage, or report storage, so it does not save user database content, credentials, queries, files, or customer data. PostgreSQL specialist workflows are downstream paths with their own evidence-sharing rules. |
| Approval required | Any index, SQL, configuration, vacuum, migration, or rollout action belongs to your team's approval and change-management process. |
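The review dimensions in the table above (confidence, impact, reversibility) can also explain how recommendations end up ranked. A sketch under stated assumptions: the `Finding` fields, weights, and multiplicative score are illustrative, not a prescribed scoring model.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    confidence: float     # 0..1, how well the evidence supports it
    impact: float         # 0..1, expected benefit if acted on
    reversibility: float  # 0..1, how cheaply it can be undone

def rank(findings: list[Finding]) -> list[Finding]:
    # Prefer well-evidenced, high-impact, easily reversed steps first.
    return sorted(
        findings,
        key=lambda f: f.confidence * f.impact * f.reversibility,
        reverse=True,
    )

queue = rank([
    Finding("rewrite reporting query", 0.9, 0.6, 1.0),  # no production change
    Finding("add covering index", 0.7, 0.8, 0.8),
    Finding("raise shared_buffers", 0.4, 0.5, 0.6),
])
print([f.name for f in queue])
```

A low-risk, well-evidenced rewrite outranks a plausible index, which outranks a weakly evidenced configuration change; the human reviewer still decides what enters the approval process.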
Review the read-only database audit boundary before sharing evidence, credentials, or production context.
## Where to go
The workflow is broad by design. It helps MySQL, SQL Server, PostgreSQL, and other database teams frame the same decision path without claiming every engine has a formal service workflow here today.
Use the audit page to turn symptom, evidence, scope, and safety limits into a clear request for review.
Open database optimization audit

Use the checklist to prepare the business symptom, database context, query evidence, safety constraints, and approval path.

Open preparation checklist

Use the sample report to see how evidence, findings, risk notes, recommendations, and approval boundaries should be presented.

Open sample report

## PostgreSQL routing
If your evidence points to PostgreSQL, use this broad workflow to frame the issue, then continue into the PostgreSQL pages for the current specialist collector and sample report path.
Describe the symptom, share safe evidence, define production boundaries, and confirm who approves any database change.
Move to PostgreSQL when the engine, evidence, and next step fit that specialist route instead of a generic database optimization explanation.
Start with the broad audit scope, prepare inputs with the checklist, or compare expectations against the sample report. PostgreSQL teams can continue to the PostgreSQL audit hub.