Introduction
Hove Capital Limited develops financial software for the fintech, personal finance, and banking sectors. We recognise both the substantial advantages that artificial intelligence can offer to software development and financial services, and the heightened need for care, control, and transparency where personal data, client systems, and regulated activities are involved.
This policy is the public counterpart to our internal AI Usage Policy, which sets the operational rules our staff and contractors follow. Where this policy gives an overview, the internal policy provides the detailed procedures. Both are reviewed at least annually, and sooner where our AI tooling materially changes.
Scope
This policy applies to all use of AI for Hove Capital work, on any Company device, in conjunction with any software or service we provide, or otherwise on our behalf. We use the term "AI" broadly. It covers:
- Third-party hosted AI services such as commercial chat assistants and APIs
- Self-hosted AI software, including locally run open-weight models
- AI hardware operated by or for the Company
- Any retrieval, agentic, or tooling layer built around such AI, including Model Context Protocol (MCP) servers
Genuinely personal AI use, unconnected with Company work and involving no Company data, falls outside the scope of this policy.
Our Principles
The following principles guide every decision we take about AI:
- Responsibility: We are accountable for the AI we use and for the outputs we accept and apply. AI is a tool, not an authority.
- Lawfulness: Our AI use complies with applicable law, including data protection law, intellectual property law, and any sector-specific obligations we or our clients are subject to.
- Confidentiality: We use AI in a way that protects confidential and personal information, both our own and that of our clients and Data Subjects.
- Proportionality: We select and deploy AI proportionately to the task, with the minimum necessary data exposure for the result required.
- Verification: AI outputs, including code, text, and decisions, are reviewed by a human before being relied upon, deployed, or shared externally.
- Transparency: We are open with our clients and Data Subjects about our use of AI, including disclosing AI involvement in deliverables where appropriate.
- Continuous review: The AI landscape changes rapidly. We expect our approved tooling to evolve, and we manage every such change as a controlled change.
AI Tools Register
We maintain an internal AI Tools Register that lists, at any given time, the AI tools, services, models, and hardware approved for Company use, together with the approved purposes, restrictions, deployment model, and any cross-border or data-sensitivity flags.
Tools and configurations not on the Register are not used for Company work. New entries and material changes to existing entries are approved by the Managing Director, and additionally assessed by the Data Protection Officer where personal data is involved. The Register is reviewed at every quarterly access review and any time a new AI tool, model, or significant hardware change is introduced.
Deployment Models
Different ways of running AI carry different risk and compliance characteristics. Each entry in our Register is mapped to one of the following deployment models, and the constraints below apply:
- Commercial cloud AI APIs: Third-party hosted AI services accessed over the internet. Used only under business-tier or higher terms of service that prohibit training on Company input. Cross-border data transfers are assessed under our Data Protection Policy.
- Rented GPU compute: Open-weight or proprietary models run on rented infrastructure where we control the workload but not the underlying hardware. No personal data, no client production data, and no Company secrets are processed on rented compute without prior assessment by our Data Protection Officer.
- Local Company-managed AI hardware: Hardware operated by us at a staff working location, running open-weight models for batch or research workloads only. Strictly out of scope for client production traffic and for any personal data of any real Data Subject. Used only with synthetic, public, or properly anonymised data.
- Local on-device inference: Lightweight models run directly on a Company device, used for assistive tasks such as autocomplete or summarisation. Inputs do not leave the device. Models are loaded only from reputable, attributable sources.
- Custom AI tooling and MCP servers: Internal tooling that connects an AI to data sources. Read-only access to non-production, anonymised, development-only data by default. Production data, production credentials, and personal data of any real Data Subject are not reachable through any such tooling without prior written approval and a Data Protection Impact Assessment.
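The read-only, non-production default for custom tooling can be illustrated with a simple guard placed between the AI and the data source. The sketch below is illustrative only: the schema names and function names are our own assumptions, not part of any particular MCP implementation, and a real deployment would enforce the same rules at the database-permission level as well.

```python
import re

# Illustrative defaults: AI tooling may only read from anonymised,
# development-only schemas (these schema names are hypothetical).
ALLOWED_SCHEMAS = {"dev_anonymised", "synthetic"}

# Statements that modify data or structure are rejected outright.
WRITE_KEYWORDS = re.compile(
    r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|TRUNCATE|GRANT)\b",
    re.IGNORECASE,
)

def guard_query(sql: str, schema: str) -> str:
    """Reject queries that write data or touch a non-approved schema."""
    if schema not in ALLOWED_SCHEMAS:
        raise PermissionError(f"schema {schema!r} is not approved for AI tooling")
    if WRITE_KEYWORDS.search(sql):
        raise PermissionError("write operations are not permitted through AI tooling")
    return sql  # safe to forward to the read-only connection
```

A guard like this is a second line of defence, not a substitute for least-privilege credentials: the connection the tooling uses should itself lack write access to any production system.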
Data Sensitivity
We classify data and apply corresponding rules on AI use. The summary below sets out what we permit, and what we never permit, at each tier:
- Public: Public documentation, our published policies, and public open-source code. Permitted with any approved tool.
- Internal (non-sensitive): Internal documents that do not contain personal data, client confidential information, or Company secrets, and non-sensitive code. Permitted with any approved tool.
- Confidential: Client business information not in the public domain, non-public commercial information, and non-personal-data analytics. Permitted only with commercial cloud AI APIs operating under business-tier terms that prohibit training on our input. Not processed on rented GPU compute or local Company-managed AI hardware without prior approval from our Data Protection Officer.
- Personal data: Any data identifying a Data Subject. Permitted only where we have a lawful basis under our Data Protection Policy, the AI tool's terms support the processing (including a Data Processing Agreement and appropriate safeguards for any cross-border transfer), and the Data Protection Officer has assessed and recorded the use. A Data Protection Impact Assessment is carried out where processing is likely to result in a high risk to Data Subjects.
- Production secrets and credentials: API keys, deployment keys, database credentials, environment files, certificate private keys, and OAuth tokens. Never submitted to any AI tool under any circumstances. Any inadvertent submission triggers immediate rotation of the affected secret and incident handling.
- Client production data: Live customer records, transaction data, and end-user content held in client production systems where we act as Data Processor. Never submitted to any AI tool except under explicit written instruction from the relevant client, supported by an appropriate Data Processing Agreement that authorises that use.
Where this policy refers to "anonymised" data, we mean data processed such that no living individual can be identified, directly or indirectly, by reasonably likely means. Pseudonymised data, where re-identification remains possible by reference to a key or to other data, is not anonymised and continues to be treated as personal data.
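The "never submitted" rule for production secrets and credentials is easiest to uphold when text is scanned before it reaches any AI tool. The sketch below is a minimal illustration of that idea; the patterns shown are our own assumptions, deliberately incomplete, and any real deployment would rely on a maintained secret scanner rather than this short list.

```python
import re

# Illustrative patterns for common credential formats. This list is
# NOT exhaustive; a production check would use a maintained scanner.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key ID format
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),   # PEM private key header
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"),  # key=value style
]

def contains_secret(text: str) -> bool:
    """Return True if the text appears to contain a credential."""
    return any(p.search(text) for p in SECRET_PATTERNS)

def check_before_submit(text: str) -> str:
    """Refuse to forward text that looks like it contains a secret."""
    if contains_secret(text):
        raise ValueError("possible secret detected: do not submit; rotate if exposed")
    return text
```

Because pattern matching can never catch every secret, a failed scan only ever blocks submission; a passed scan does not remove the obligation to rotate any credential that is later found to have been exposed.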
Verification of Outputs
AI-assisted code, documents, analyses, and other outputs are reviewed by a human before they are deployed, distributed, or relied upon. Information obtained directly from an AI tool is checked for accuracy and provenance before it is used in any deliverable. This covers:
- Generated code and code suggestions
- Technical documentation and specifications
- Research outputs and data analysis
- Security recommendations
- Regulatory or compliance guidance
Where AI is used to support a decision that has legal or similarly significant effects on a Data Subject, that decision is not based solely on the AI output, and the affected Data Subject's rights under Article 22 of the UK GDPR are respected.
Where AI is used to verify earlier work, for example an AI review of human or AI-generated code, the verification result is itself confirmed by a human before being acted on. Where AI assists with research or synthesis, cited or referenced facts are checked against authoritative sources before being relied upon in client-facing or regulator-facing work.
Bias and Accuracy
Outputs of AI tools may reflect biases or inaccuracies present in their training data. We treat this as a real risk and check outputs against reliable, up-to-date sources before further use.
Where AI is used in connection with regulated activities or financial decision support, we apply additional care to test for bias and to document mitigation steps. Where we identify a bias or inaccuracy that has already affected a client deliverable, we report it to our management, correct it, and inform the affected client where appropriate.
Intellectual Property
Where the relevant AI tool's terms permit, we own the intellectual property in outputs produced by our staff in the course of Company work. Where a tool's terms require attribution, we apply that attribution.
We remain alert to the risk that AI outputs may incorporate third-party intellectual property from training data without attribution. Where there is reasonable doubt as to the licensability of an output, the output is reworked or discarded.
Source code containing material commercial value, trade secrets, or third-party-licensed components subject to use restrictions is not submitted to AI tools other than those whose terms have been specifically assessed for that purpose.
Transparency with Clients and Data Subjects
Where AI has been used in producing a deliverable, we are transparent with the client about that use. Where AI is used in connection with personal data, the affected Data Subjects are informed in accordance with our Data Protection Policy and the relevant privacy notice.
Security
We apply the following security measures to our AI use:
- AI tools accessed via the internet are authenticated using our standard access controls, including multi-factor authentication where supported.
- Local Company-managed AI hardware is physically secured at the staff working location and recorded in our asset register.
- Network access to local Company-managed AI hardware is limited and is not exposed to the public internet without prior approval from our management.
- Models, weights, and any cached data on local AI hardware do not include personal data of any real Data Subject or client production data.
- Custom MCP servers and similar tooling apply least-privilege access in line with our Access Control Policy and log access for review.
Compliance & Standards
Hove Capital is committed to working in line with established standards for the responsible use of artificial intelligence, including the principles laid out in BS EN ISO/IEC 42001, the international standard for artificial intelligence management systems.
We comply with all applicable legal and regulatory obligations concerning AI, including those issued by the Financial Conduct Authority (FCA) and the Information Commissioner's Office (ICO).
Policy Review
We review this policy and our underlying AI practices at least annually, and sooner where required. A review is also triggered by:
- Material changes to our AI Tools Register
- Any incident involving AI
- Significant developments in AI technology or in related regulation
- New AI tools or fresh use cases being proposed
Contact Us
Should you have any questions about this AI Policy or about the way we use artificial intelligence, please get in touch:
- Email: contact@hove.capital
- Data Protection Officer: dpo@hove.capital
- Telephone: 01273 937117
- Address: Curtis House, 34 Third Avenue, Hove, BN3 2PD
Hove Capital Limited is registered in England under company number 13049782.