BVP Tech Charter

Purpose

BVP builds technology to expand human capability, community wealth, and access to opportunity. Our products and operations use AI, automation, and data responsibly, with safeguards that protect dignity, privacy, and fairness. This charter sets non-negotiable principles that govern how BVP designs, deploys, and operates technology across ChatBVP, BVP Coffee Co., and future platforms, devices, and operating systems.

Scope

This charter applies to all BVP-built systems and workflows, including AI agents, recommendation systems, customer support automation, analytics, personalization, robotics and edge AI, financial coaching tools, and any third-party services we integrate.


Non-negotiables

  1. Data minimization
    We collect and retain the minimum data required to deliver the specific service a user requested.

  • We do not collect “just in case” data.

  • We use aggregation and de-identification where possible.

  • We apply strict retention schedules and delete data when it is no longer needed.

  • We prefer on-device or local processing when feasible for sensitive use cases.
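The retention commitment above can be sketched as a simple policy table with an expiry check; the data categories and windows here are illustrative assumptions, not BVP's actual schedule.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention windows by data category (assumed values,
# not BVP's actual schedule).
RETENTION_DAYS = {
    "order_history": 365,      # needed for refunds and support
    "support_transcripts": 90,
    "analytics_events": 30,    # aggregated or deleted beyond this window
}

def is_expired(category: str, collected_at: datetime, now: datetime) -> bool:
    """Return True when a record has outlived its retention window
    and should be deleted or de-identified."""
    window = timedelta(days=RETENTION_DAYS[category])
    return now - collected_at > window
```

A scheduled job could sweep each store with a check like this, so deletion is the default rather than a manual cleanup task.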

  2. Informed consent and user control
    Users must understand what is happening and have meaningful choices.

  • We clearly disclose what data is collected, why, and how it is used.

  • Consent is specific, revocable, and not bundled into unrelated permissions.

  • Users can access, export, correct, and delete their data within reasonable operational limits.

  • We provide settings that let users opt out of personalization and non-essential tracking.

  3. No covert surveillance
    We do not secretly monitor people.

  • No hidden microphones, cameras, keystroke logging, or undisclosed location tracking.

  • No background collection of sensitive signals unrelated to the user’s requested service.

  • No “shadow profiles” or data buying for covert targeting.

  • Any workplace or facility monitoring must be disclosed, narrowly scoped, and safety-justified.

  4. No discriminatory profiling
    We do not build or deploy systems that discriminate, exclude, or exploit protected or vulnerable groups.

  • We do not use protected characteristics (or proxies) to limit access, raise prices, reduce service quality, or manipulate outcomes.

  • We test for disparate impact and address it before launch and during operation.

  • We ensure training data, evaluation, and monitoring reflect real user diversity.

  • We do not deploy high-stakes automated decisions without rigorous review and accountability.
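One common way to operationalize the disparate-impact testing named above is the "four-fifths" rule of thumb: flag a system when any group's favorable-outcome rate falls below 80% of the highest group's rate. This is a minimal sketch of that check, with hypothetical group names and counts; it is one screening heuristic, not a complete fairness evaluation.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (favorable_count, total_count)."""
    return {group: fav / total for group, (fav, total) in outcomes.items()}

def passes_four_fifths(outcomes: dict[str, tuple[int, int]]) -> bool:
    """Return False (flag for review) if any group's selection rate is
    below 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())
```

A failing check would block launch and route the system back through review, consistent with "address it before launch and during operation."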

  5. Clear human escalation
    Humans remain accountable for outcomes, especially when money, access, or reputation is at stake.

  • Users can reach a human for support within published service levels.

  • Automated actions that materially affect a user (refund denial, account restrictions, pricing disputes, financial guidance flags, content moderation) must have a human escalation path.

  • We maintain audit logs for key decisions and interventions.

  • We designate accountable owners for every AI system in production.
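The audit-log and accountable-owner commitments above imply a record for every material automated decision. This is a hypothetical sketch of such a record serialized as one JSON line for an append-only log; the field names and values are illustrative, not a BVP schema.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical audit-log entry for an automated decision that may be
# escalated to a human; field names are illustrative.
@dataclass
class DecisionRecord:
    system: str               # AI system name in production
    owner: str                # designated accountable owner
    action: str               # e.g. "refund_denied", "account_restricted"
    user_id: str
    escalated_to_human: bool  # whether a human escalation path was used
    timestamp: str            # ISO-8601 UTC

def log_decision(record: DecisionRecord) -> str:
    """Serialize a decision record as a single JSON line suitable for
    an append-only audit log."""
    return json.dumps(asdict(record))
```

Keeping one line per decision makes the log easy to audit, filter by owner or system, and retain under the same schedules as other operational data.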

  6. Right to contest automated outcomes
    Users can challenge and correct harmful or incorrect system decisions.

  • We provide a straightforward appeal process.

  • We offer a clear explanation of the basis for the decision in plain language, to the extent feasible without compromising security or privacy.

  • We review contested outcomes promptly and document resolutions.

  • We continuously improve models and rules based on verified errors and harms.


Operational commitments

Security by design

We implement security controls proportional to risk, including encryption, access controls, monitoring, and incident response. We limit internal access to sensitive data to those who need it for a defined purpose.

Transparency by default

We disclose when users are interacting with AI systems, what the system is optimizing for, and any material limitations. We do not present AI as human.

Safety and reliability standards

We evaluate AI systems for accuracy, robustness, and harm. We implement guardrails for misuse and failure modes, and we pause or roll back systems that fail our standards.

Vendor and partner alignment

Third-party tools and platforms must meet these principles. Where vendor defaults conflict with this charter, we configure them to comply or do not use them for that purpose.


Governance and accountability

BVP Technology Stewardship

BVP maintains a cross-functional stewardship group spanning product, engineering, and operations, with legal/compliance involvement as needed and community or user representation when appropriate. This group:

  • Reviews high-impact systems before launch

  • Sets risk tiers and required controls

  • Audits compliance and publishes internal scorecards

  • Owns incident response and corrective action plans


Enforcement

Violations of these non-negotiables trigger immediate review, mitigation, and, if necessary, shutdown of the offending system. Accountability is assigned to a named internal owner. Repeated violations prompt leadership review and structural changes.


Effective date and updates

This charter is effective immediately upon publication. Updates require review by BVP Technology Stewardship and leadership approval, and will be versioned with change notes.


In Truth & Service,
Bison Venture Partners
Technology Stewardship and Leadership
