Before we ask a winery or distillery to trust us with their harvest logs, production data, customer lists, or proprietary recipes, we owe them a clear answer to one question: what are you going to do with our data? This page is that answer — in plain language, in writing, before any engagement begins.
Last updated April 2026 · v1.0
These aren't marketing commitments. Every principle on this page is written into our standard engagement terms. If any of them won't work for your operation, we'll negotiate them — but we won't wave them away.
We publish this page because the wine and spirits industry is cautious about technology for good reasons. Your mash bill, vineyard block data, club member list, and production playbook are competitive assets. An AI consultancy that can't articulate how it handles those assets has no business near them.
Every piece of data you share with us — harvest records, fermentation logs, customer files, recipes, sales history — remains your property. We take no license to use, resell, or repurpose it beyond the scope of the work you've hired us to do.
At the end of the engagement, your data is returned or destroyed per your instruction, and we'll confirm in writing.
We do not train machine learning models on your data. We do not fine-tune general models on your data. We do not use your data to improve our internal tooling or benchmarks.
If a project calls for a custom fine-tuned model, that's scoped and contracted separately, the resulting model belongs to you, and no other client sees it.
When we call large language models on your behalf, we use the enterprise tiers of Anthropic's Claude API, OpenAI's API, and Amazon Bedrock. All three contractually guarantee that your prompts and responses are not retained, not logged for training, and not visible to the providers' staff beyond narrow abuse-monitoring requirements.
We will share the relevant provider Data Processing Addenda with you on request.
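On our side of those API calls, we also keep prompt contents out of our own logs. A minimal sketch of that practice in Python — the client identifier and field names are hypothetical, and provider-side retention remains a contractual matter governed by the DPAs above, not something this code controls:

```python
import hashlib
import json
import logging

logger = logging.getLogger("llm_audit")

def audit_record(client_id: str, prompt: str, response_text: str) -> dict:
    """Build an audit-log entry proving a call happened without
    persisting the prompt or response contents themselves.

    Illustrative sketch: we log only content hashes and sizes, so our
    own logs never become a second copy of client data.
    """
    return {
        "client_id": client_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
        "response_sha256": hashlib.sha256(response_text.encode()).hexdigest(),
        "response_chars": len(response_text),
    }

record = audit_record(
    "example-winery",
    "Summarize 2025 pinot noir fermentation logs...",
    "Summary: ...",
)
logger.info(json.dumps(record))  # hashes and sizes only; never the text
```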
Before our first working session involving any real operational data, we sign a mutual NDA with a three-year survival period. We have a standard template ready, and we're equally happy to sign yours.
The NDA covers both directions — our proprietary methods are also confidential to you — and it survives termination of the engagement.
During discovery, we work from samples, redactions, and anonymized extracts wherever the analysis allows. We don't ask for a full customer export when a 10,000-row sample answers the question.
Full-volume data access happens only inside an active, scoped engagement, and only for the team members who need it for the work.
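The sampling-and-redaction step described above can be sketched in a few lines of Python. This is an illustrative sketch, not our production pipeline: the field names (`name`, `email`) and the fixed-seed policy are assumptions for the example.

```python
import hashlib
import random

def anonymized_sample(rows, sample_size, drop_fields=("name",), hash_fields=("email",), seed=0):
    """Take a reproducible random sample and redact identifying fields."""
    rng = random.Random(seed)  # fixed seed so the sample can be re-derived for review
    sample = rng.sample(rows, min(sample_size, len(rows)))
    out = []
    for row in sample:
        # drop direct identifiers entirely
        clean = {k: v for k, v in row.items() if k not in drop_fields}
        for f in hash_fields:
            if f in clean:
                # one-way hash keeps join-ability across tables without exposing the value
                clean[f] = hashlib.sha256(clean[f].encode()).hexdigest()[:16]
        out.append(clean)
    return out

members = [{"name": f"Member {i}", "email": f"m{i}@example.com", "tier": "reserve"} for i in range(50)]
sample = anonymized_sample(members, 10)
```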
A "subprocessor" is any third-party vendor that might touch your data on our behalf — cloud infrastructure, AI APIs, collaboration tools. Our current list is published below and kept current.
Before adding a new subprocessor that will touch your data, we'll notify you with at least 30 days' notice. You have the right to object, and if we can't accommodate the objection, you have the right to terminate without penalty.
Data at rest is encrypted with AES-256. Data in transit is TLS 1.2 or higher. Access is role-based, MFA-enforced, and logged. Client data is logically segregated — your data never sits in a shared bucket with another client's data.
Every team member with access to client systems signs a confidentiality agreement and completes security onboarding before being granted credentials.
Your data is stored and processed in US infrastructure — AWS us-east-1 and us-west-2 as primary regions. Data does not leave US jurisdiction without explicit written approval from you.
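For readers who want to see those storage controls concretely, here is a sketch of the two S3-level settings involved: a default-encryption rule (AES-256 at rest) and a bucket policy that rejects non-TLS requests. The bucket name is hypothetical; the dict shapes follow AWS's published formats and would be applied with boto3 or Terraform in practice.

```python
def storage_controls(bucket: str) -> tuple[dict, dict]:
    """Return (encryption_config, bucket_policy) matching the controls above."""
    # Default server-side encryption: every object written is AES-256 encrypted at rest.
    encryption = {
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    }
    # Bucket policy: refuse any request that does not arrive over TLS.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            }
        ],
    }
    return encryption, policy

enc, pol = storage_controls("client-data-example")
```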
If anything happens involving your data — a suspected breach, an access misstep, a vendor issue — you hear from us within 72 hours of our becoming aware, and we aim for 24.
You'll receive a written incident report, a full postmortem within 30 days, and whatever remediation the situation calls for. We will not bury or obscure incidents. The reputational cost of transparency is lower than the reputational cost of being found out.
Within 30 days of engagement completion, we delete all copies of your operational data from our systems and subprocessors, and send written confirmation.
Exceptions: (a) anonymized artifacts used in a case study you've approved in writing, and (b) records we're legally required to retain (contracts, invoices, tax records) — which remain under the same confidentiality terms.
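The deletion-with-exceptions rule above is mechanical enough to sketch. In this illustrative Python example the artifact categories and field names are assumptions; the logic is simply the policy as stated: legal-retention records and written-approved case-study artifacts are kept, everything else is scheduled for deletion within 30 days.

```python
from datetime import date, timedelta

# Hypothetical category names for records we are legally required to retain.
LEGAL_RETENTION = {"contract", "invoice", "tax_record"}

def end_of_engagement_plan(artifacts, completion_date):
    """Partition stored artifacts into delete vs retain, per the policy above."""
    deadline = completion_date + timedelta(days=30)
    delete, retain = [], []
    for a in artifacts:
        if a["category"] in LEGAL_RETENTION:
            retain.append((a["id"], "legal retention"))
        elif a.get("case_study_approved_in_writing"):
            retain.append((a["id"], "approved anonymized case-study artifact"))
        else:
            delete.append(a["id"])
    return {"delete_by": deadline.isoformat(), "delete": delete, "retain": retain}

plan = end_of_engagement_plan(
    [
        {"id": "harvest-2025.csv", "category": "operational"},
        {"id": "msa.pdf", "category": "contract"},
        {"id": "anon-results.md", "category": "operational", "case_study_approved_in_writing": True},
    ],
    date(2026, 4, 1),
)
```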
Whether it's a SOC 2 vendor questionnaire, a custom IT checklist, or a formal procurement review, we'll fill it out honestly and in detail. If a question surfaces something we don't do yet, we'll tell you so rather than finesse it.
We are working toward a SOC 2 Type II attestation; our target is completion within our first 18 months of operation.
We will not:

- share your data with another client, even anonymized, without your explicit written permission;
- use insights from your operation to pitch your direct competitors on strategies you paid us to develop;
- retain identifiable data past the engagement for "future reference";
- publish case studies with your name, logo, or identifying details without your written sign-off on the specific content.
Vendors that may process your data on our behalf. This list is updated as our stack evolves.
| Vendor | Purpose | Data Processed | Location |
|---|---|---|---|
| Anthropic | LLM API (Claude) | Prompts, responses; zero retention tier | US |
| OpenAI | LLM API (GPT) | Prompts, responses; enterprise zero-retention | US |
| AWS | Cloud infrastructure, storage, Bedrock | All client data at rest and in processing | us-east-1, us-west-2 |
| GitHub | Source code, infrastructure-as-code | Code only; no client operational data | US |
| HubSpot | CRM for our own sales pipeline | Your name, email, company — no operational data | US |
| Linear | Internal project tracking | Engagement metadata; no raw client data | US |
| Google Workspace | Email, documents, video calls | Correspondence and meeting content | US |
| 1Password | Credential management | Client-issued credentials, encrypted at rest | US |
Any of the rights below can be invoked by a written request to security@vinumai.com. We commit to a substantive response within 10 business days.
You can request documentation of our security posture, a summary of where your data currently lives, and — for enterprise engagements — a reasonable in-person or virtual audit session.
You can request deletion of your data at any time during or after an engagement. We'll comply within 30 days and confirm in writing, subject only to the legal retention exceptions above.
You can request a complete export of your data, including any models, scripts, dashboards, or derived artifacts produced during the engagement. Deliverables are yours.
If we propose a new subprocessor that will touch your data, you have 30 days to object. If we can't accommodate the objection, you can terminate the affected engagement without penalty.
Security, legal, and procurement teams are welcome. Nothing on this page is meant to be read between the lines — if something's unclear, ask.
security@vinumai.com

We respond to security inquiries within one business day.