How The Backend Team Uses ZippCRM
From the internal operations point of view, ZippCRM is the primary operating console. The backend team captures new business, converts leads, launches matters, routes work to teams, requests evidence, reviews uploads, approves effort, and publishes compliance updates.
```mermaid
flowchart TD
A["Internal user"] --> B["Mission Control"]
B --> C["Control Desk"]
C --> C1["Lead"]
C1 --> C2["Client"]
C2 --> C3["Project"]
B --> D["Service Desk"]
D --> D1["Assign to team"]
D1 --> D2["Manager routes to consultant"]
D2 --> D3["Consultant logs time"]
D3 --> D4["Manager / reviewer approves"]
B --> E["Regulatory Library"]
E --> E1["Guidance"]
E1 --> E2["Impact"]
E2 --> E3["Workflow rule review / publish"]
B --> F["Ops Center"]
F --> F1["Email outbox"]
F --> F2["Notifications"]
F --> F3["Upload review"]
B --> G["AI Copilot"]
G --> G1["Chat"]
G --> G2["Action drafts"]
```
What backend teams run here
- Commercial intake and qualification.
- Client and project setup with workflow and billing context.
- Assignment routing, document requests, approvals, and audit logging.
What backend teams control
- Who can move a matter forward.
- Which evidence is still outstanding.
- Which consultant and team own each work item.
Why this matters internally
- Nothing depends on inbox memory alone.
- Delivery and compliance stay in one system.
- Customer communication, evidence, and audit history remain connected.
How The Customer Experiences ZippCRM
From the customer point of view, ZippCRM is not an internal CRM. It is the portal where they receive credentials, understand what is pending, upload required files, and track work progress without repeatedly chasing the backend team.
1. Portal onboarding
- Customer email is captured at lead or client stage.
- ZippCRM creates a portal user for that customer.
- Credentials are queued or sent from the system.
2. Matter visibility
- Customer sees their projects and any standalone service items relevant to them.
- Status, deadlines, and requested actions are visible in one place.
- Approved effort summaries can be surfaced for transparency.
3. Document response
- Backend team raises a document request from ZippCRM.
- Customer receives the request and uploads the file through the portal.
- The upload moves into the internal review queue.
4. Ongoing confidence
- Customer does not have to ask what is pending.
- Customer does not have to ask where to send files.
- Customer sees a structured engagement instead of fragmented mail threads.
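The request-upload-review loop above can be pictured as a small state machine. This is an illustrative sketch only: the state and event names below are assumptions, not ZippCRM's actual data model.

```python
# Hypothetical states for a document request, sketching the loop described
# above: backend raises a request, customer uploads, backend reviews.
ALLOWED = {
    "requested": {"uploaded"},             # customer uploads through the portal
    "uploaded": {"accepted", "rejected"},  # backend review outcome
    "rejected": {"uploaded"},              # customer may re-upload
    "accepted": set(),                     # terminal: execution continues
}

def advance(state, event):
    """Move a document request to its next state, rejecting invalid jumps."""
    if event not in ALLOWED.get(state, set()):
        raise ValueError(f"cannot move from {state!r} to {event!r}")
    return event
```

Modelling the loop this way is what keeps evidence collection "controlled": an accepted upload cannot silently revert, and a rejection always routes back to the customer for a re-upload.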
| Business object | Key captured data | Why it matters |
|---|---|---|
| Portal credentials | Portal email, contact name, password setup | Brings the customer into the same operating system as the backend team |
| Status visibility | Projects, standalone service items, progress, deadlines | Reduces manual follow-up and confusion |
| Document response | Requests, uploads, review state | Moves evidence collection into a controlled loop |
| Effort visibility | Approved time and monthly statement export | Improves transparency around work done |
How Delivery Runs Between Backend And Customer
This is the shared operating model. Internally, the team works mainly through projects, with Service Desk reserved for non-project work. Externally, the customer sees status, requests, uploads, and approved effort. ZippCRM connects both sides without losing control.
```mermaid
flowchart TD
A["Lead / Client"] --> B["Regulatory Project"]
A --> C["Assignment"]
B --> D["Onboarding -> Documentation -> Filing -> Liaison -> Issue"]
D --> E["Checklist verification + document vault + billing milestones"]
C --> F["Assigned to team"]
F --> G["Team manager routes to consultant"]
G --> H["Consultant logs time"]
H --> I["Manager / reviewer approves effort"]
I --> J["Client sees summary and statement"]
```
Regulated project path
- Each workflow family uses the same stage-gate frame with different checklist content.
- Mandatory evidence must be verified before stage progression.
- Document requests, liaison logs, expiry alerts, and milestone billing live inside the project record.
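The stage-gate rule above (mandatory evidence verified before progression) reduces to a simple predicate. The checklist shape here is a hypothetical sketch, not the real schema:

```python
def can_advance_stage(checklist):
    # A stage may progress only when every mandatory checklist item has
    # been verified; optional items never block the gate.
    return all(item["verified"] for item in checklist if item["mandatory"])
```

The same predicate applies to every workflow family; only the checklist content differs per stage, which is what the "same frame, different content" model above describes.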
Assignment path
- Used for accounting, GST, legal, secretarial, advisory, banking, and ad hoc regulator-linked work.
- The standalone service item is owned by a team first, then routed by the team manager to named consultants.
- Multiple time entries, billable flags, and monthly statements are supported.
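As a rough sketch of the monthly statement roll-up described above (the field names, such as `billable` and `approved`, are assumptions rather than ZippCRM's actual schema):

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

@dataclass
class TimeEntry:
    consultant: str
    day: date
    hours: float
    billable: bool
    approved: bool

def monthly_statement(entries, year, month):
    """Sum approved, billable hours per consultant for one month."""
    totals = defaultdict(float)
    for e in entries:
        if e.approved and e.billable and (e.day.year, e.day.month) == (year, month):
            totals[e.consultant] += e.hours
    return dict(totals)
```

Only approved entries reach the statement, which is why effort approval sits between time logging and client visibility in the flow above.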
Customer collaboration path
- Customer receives requests from ZippCRM instead of informal follow-up.
- Customer uploads into the portal instead of replying with attachments everywhere.
- Backend team reviews, accepts, and continues execution from the same system.
Shared Controls Between Backend And Customer
The backend team and the customer do not use the product in the same way, but they depend on the same control layer: permissions, document handling, notifications, audit history, and governed compliance updates.
| Control | Backend view | Customer view |
|---|---|---|
| RBAC | Internal roles govern write, review, billing, and user management. | Customer sees only their own records and upload actions. |
| Document vault | Backend team stores, reviews, and manages matter evidence. | Customer uploads directly into the right record. |
| Notifications | Ops center shows email, uploads, and internal events. | Customer gets a clearer request-and-response loop. |
| Audit + governance | Backend team gets operational accountability and regulated change control. | Customer benefits from a more reliable and traceable service process. |
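The RBAC split in the table can be illustrated with a minimal visibility filter. Role names and record fields are hypothetical, chosen only to show the backend-versus-customer scoping:

```python
def visible_records(user, records):
    # Portal customers are scoped to their own client; internal roles
    # see every record (write/review rights would be checked separately).
    if user.get("role") == "customer":
        return [r for r in records if r["client_id"] == user["client_id"]]
    return records
```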
Current Runtime Baseline
This is the practical technical baseline that supports both the backend team workflow and the customer portal experience. All systems are confirmed operational as of April 2026.
Frontend
- Mission Control, Leads, Projects, Service Desk, Regulatory Library, Ops Center, AI Copilot, User Admin, and Workflow Atlas.
- Readable control-desk forms with field guidance and inline validation states.
- Role-aware views for internal teams and client users. The login field is `email`, not a username.
Backend
- Python API with workflows, leads, clients, projects, service items, timesheets, notifications, AI settings, and audit logging.
- PostgreSQL 16 as the system of record for operational, security, and governance entities.
- Persistent document file storage on a dedicated volume; optional S3/MinIO override via environment variables.
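The storage override can be pictured as a small selection function. The `S3_*` names follow the convention stated above; `DOCUMENT_DIR` and the fallback path are invented here for illustration:

```python
import os

def pick_document_store(env=os.environ):
    """Choose a document storage backend from environment variables.

    Sketch only: falls back to the persistent local volume unless every
    S3 variable is set, mirroring the optional override described above.
    """
    s3_keys = ("S3_ENDPOINT", "S3_BUCKET", "S3_ACCESS_KEY", "S3_SECRET_KEY")
    if all(env.get(k) for k in s3_keys):
        return {"kind": "s3", "endpoint": env["S3_ENDPOINT"], "bucket": env["S3_BUCKET"]}
    # Local volume default; DOCUMENT_DIR is a hypothetical variable name.
    return {"kind": "local", "path": env.get("DOCUMENT_DIR", "/var/lib/zippcrm/documents")}
```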
Operational engine
- Workflow templates for RBI, SEBI, and IRDAI seeded at boot.
- Assignment desk supports team-owned service work and consultant allocations.
- Stage progression and effort approval both have controlled review paths.
Client and governance controls
- Client portal users are created from contact email and can receive credentials from ZippCRM.
- Regulatory library supports draft, review, approve, and publish flows for workflow changes.
- AI Copilot uses shared provider settings and action drafts for operational acceleration.
Deployment Options
ZippCRM can be deployed on Docker (recommended for getting started), Kubernetes (production scale), or directly on Linux or Windows servers (bare metal / VM).
🐳 Docker (Recommended)
- Three containers: `postgres`, `backend`, `frontend`. Starts with a single `docker compose up -d`.
- All config lives in `.env`; this file takes precedence over `docker-compose.yml` defaults. Always update both when changing keys.
- Ports: Frontend on `3002`, Backend API on `8088`, Postgres on `5436`.
- Upgrade: replace the image and run `docker compose up -d --build`. Data volumes persist across rebuilds.
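A minimal `docker-compose.yml` matching the port and volume notes above might look like the sketch below. The image names and volume key are assumptions, not the file shipped with ZippCRM:

```yaml
services:
  postgres:
    image: postgres:16
    ports: ["5436:5432"]          # host 5436 -> container 5432
    env_file: .env
    volumes: ["pgdata:/var/lib/postgresql/data"]
  backend:
    image: zippcrm/backend:latest   # hypothetical image name
    ports: ["8088:8088"]
    env_file: .env
    depends_on: [postgres]
  frontend:
    image: zippcrm/frontend:latest  # hypothetical image name
    ports: ["3002:80"]
    depends_on: [backend]
volumes:
  pgdata:                           # survives `docker compose up -d --build`
```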
☸️ Kubernetes / Helm
- Deploy the backend as a `Deployment` with ≥2 replicas behind a `ClusterIP` Service. Use a `HorizontalPodAutoscaler` (target CPU 70%) for elastic scale.
- Inject all secrets via a Kubernetes `Secret` object; never bake credentials into the image.
- Scheduler isolation: set `SCHEDULER_ENABLED=true` on exactly one replica (a dedicated single-replica Deployment). Running the scheduler on every pod causes duplicate job execution.
- In production, PostgreSQL should be an external managed service (AWS RDS, GCP Cloud SQL). Use an `ExternalName` Service or a direct `DATABASE_URL` secret.
- Use a `PersistentVolumeClaim` with `ReadWriteMany` access (NFS/EFS) for shared document storage across replicas, or switch to S3-compatible object storage via the `S3_*` env vars.
- Ingress: configure TLS termination at the Ingress controller (nginx-ingress or AWS ALB). Set `PORTAL_URL` to your public HTTPS domain.
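The scheduler-isolation pattern can be sketched as a dedicated single-replica Deployment. Resource names, labels, and the image tag below are assumptions:

```yaml
# One replica carries SCHEDULER_ENABLED=true; the scaled API replicas do not.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zippcrm-scheduler
spec:
  replicas: 1                 # never scale this Deployment
  selector:
    matchLabels: {app: zippcrm-scheduler}
  template:
    metadata:
      labels: {app: zippcrm-scheduler}
    spec:
      containers:
        - name: backend
          image: zippcrm/backend:latest   # hypothetical image name
          env:
            - name: SCHEDULER_ENABLED
              value: "true"
          envFrom:
            - secretRef:
                name: zippcrm-secrets     # DATABASE_URL, SMTP_*, etc.
```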
🐧 Bare Metal — Linux (Ubuntu 22.04)
- Requirements: Python 3.12, PostgreSQL 16, Nginx, Certbot. Install via `apt` plus the deadsnakes PPA.
- Run the backend as a `systemd` service under a dedicated `zippcrm` system user. Set `Restart=on-failure` and `RestartSec=5`.
- Nginx reverse-proxies `localhost:8080` (backend) and serves the `frontend/` static files. Enable TLS with `certbot --nginx`.
- Set all environment variables in the `[Service]` section's `Environment=` entries of the systemd unit file, or via an `EnvironmentFile=` pointing to `/etc/zippcrm/.env`.
- Database connection: `DATABASE_URL=postgresql://zippcrm:zippcrm@localhost:5432/zippcrm`.
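A systemd unit following the notes above might look like this sketch; the install path `/opt/zippcrm/backend` and the `app.py` entry point are assumptions:

```ini
# /etc/systemd/system/zippcrm-backend.service
[Unit]
Description=ZippCRM backend API
After=network.target postgresql.service

[Service]
User=zippcrm
WorkingDirectory=/opt/zippcrm/backend
EnvironmentFile=/etc/zippcrm/.env
ExecStart=/usr/bin/python3.12 app.py
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

After placing the file, enable it with `systemctl daemon-reload && systemctl enable --now zippcrm-backend`.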
🪟 Bare Metal — Windows Server
- Requirements: Python 3.12 (Microsoft Store or python.org), PostgreSQL 16, Nginx for Windows. Easiest via Chocolatey: `choco install postgresql16 python312 nginx`.
- Run the backend as a Windows service using NSSM (the Non-Sucking Service Manager): `nssm install ZippCRMBackend python app.py`. Set the startup directory, stdout/stderr log paths, and environment variables in the NSSM GUI.
- Nginx serves the static frontend files and reverse-proxies the API. Place the config in `C:\nginx\conf\zippcrm.conf`.
- For HTTPS on Windows, use win-acme (a Let's Encrypt ACME client for IIS/Nginx on Windows).
- Service management: `nssm start ZippCRMBackend` / `nssm stop ZippCRMBackend` / `nssm restart ZippCRMBackend`.
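A possible `zippcrm.conf` for Nginx on Windows is sketched below; the backend port `8088` and the static root `C:/zippcrm/frontend` are assumptions to adapt:

```nginx
server {
    listen 80;
    server_name crm.example.com;

    # Static frontend build
    root C:/zippcrm/frontend;
    index index.html;

    # Reverse-proxy the backend API
    location /api/ {
        proxy_pass http://127.0.0.1:8088;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```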
Key Environment Variables

| Variable | Purpose |
|---|---|
| `DATABASE_URL` | Full PostgreSQL connection string |
| `ZIPPCRM_LICENSE_KEY` | Enterprise key; must match in both `.env` and `docker-compose.yml` |
| `PORTAL_URL` | Public URL customers use to reach the portal (sets CORS + email links) |
| `ANTHROPIC_API_KEY` / `OPENAI_API_KEY` | AI Copilot providers (either or both) |
| `SMTP_HOST` / `SMTP_USERNAME` / `SMTP_PASSWORD` | Email notifications; can also be set post-boot in Admin → Settings |
| `S3_ENDPOINT` / `S3_BUCKET` / `S3_ACCESS_KEY` / `S3_SECRET_KEY` | Optional object storage override for documents (multi-pod / Kubernetes) |
| `WHATSAPP_BSP_URL` / `WHATSAPP_TOKEN` / `WHATSAPP_NUMBER` | WhatsApp BSP integration for client notifications |
| `SCHEDULER_ENABLED` | Set `true` on exactly one replica only (Kubernetes / multi-instance) |
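A sample `.env` tying these variables together; every value below is a placeholder, not a working credential:

```shell
DATABASE_URL=postgresql://zippcrm:change-me@postgres:5432/zippcrm
ZIPPCRM_LICENSE_KEY=XXXX-XXXX-XXXX
PORTAL_URL=https://crm.example.com
ANTHROPIC_API_KEY=replace-me
SMTP_HOST=smtp.example.com
SMTP_USERNAME=notify@example.com
SMTP_PASSWORD=change-me
SCHEDULER_ENABLED=false
```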