Frequently Asked Questions
What is Bilby Quant?
Bilby Quant provides an alternative data stream designed to help quantify risk arising from political decisions in China. The quantitative signals are algorithmically distilled from Bilby's constantly updated collection of official policy documents and delivered directly to you. This structured data extends back to 2021, with new documents added daily.
Who will benefit from Bilby Quant Data, and how?
Anyone who needs a consistent, quantified signal capturing emerging policy risk: for example, investors, government analysts and policymakers, and corporations with China exposure.
How do I get started?
- Click Sign In to register for an account.
- We will contact you via the email address you used to sign up.
- Please respond indicating your preferred data subscription plan and delivery method.
- We will authorise your access and confirm via email.
- You will then be provided with access to our data portal.
How much does it cost to subscribe to Bilby Quant Data?
Please contact us at accounts@bilby.ai for pricing enquiries and to discuss how we can best deliver the data to you.
What does the data look like?
Please see our datasets page for details of the data that is available.
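If you prefer to inspect a downloaded data dump directly, here is a minimal sketch in Python, using pyarrow, that prints the column names and types of a Parquet file without loading it fully into memory. The file name is a placeholder rather than an actual dump name; the authoritative field list is in our Field Documentation.

```python
import pyarrow.parquet as pq

# Placeholder file name: substitute a dump downloaded from the portal.
DUMP_PATH = "bilby_quant_documents.parquet"

# Read only the schema so we can see which fields are present
# without loading the whole file.
schema = pq.read_schema(DUMP_PATH)
for field in schema:
    print(f"{field.name}: {field.type}")
```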
Where does your data come from, and how is it enriched?
The raw data comes from a range of publicly available sources:
- Official government sources
- Newspapers
- Reports from ministries
- Documents from state-owned enterprises
Bilby enriches this data with machine learning models that perform:
- Entity extraction: Identification of people, organisations, locations, currency mentions, events, initiatives, and other key entities mentioned in documents
- Policy lifecycle classification: Labelling documents according to their stage in the policy development process (informing, deciding, implementing)
Please see our Field Documentation for complete details.
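As an illustration only, the sketch below shows how these enrichment fields might be used once a dump has been loaded into pandas. The column names ("entities", "policy_lifecycle_stage") and the example organisation are assumptions made for the sake of the example; the actual field names are given in the Field Documentation.

```python
import pandas as pd

# NOTE: the column names below are illustrative assumptions, not the actual schema.
df = pd.read_parquet("bilby_quant_documents.parquet")

# Keep documents classified at the "implementing" stage of the policy lifecycle.
implementing = df[df["policy_lifecycle_stage"] == "implementing"]

# Among those, find documents whose extracted entities mention a given organisation.
mentions = implementing[
    implementing["entities"].apply(lambda ents: "People's Bank of China" in ents)
]
print(f"{len(mentions)} implementing-stage documents mention the organisation")
```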
How far back does your data go?
For some sources, the data goes back to the beginning of 2021.
How frequently is the data updated?
Documents are scraped from sources several times per day. The data dump generation pipeline runs once daily, producing new Parquet files that are immediately available through the portal.
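A daily job that picks up the newest dump could look like the sketch below. It assumes dumps downloaded from the portal are stored locally under ISO-dated file names; that directory layout and naming pattern are assumptions for illustration, not the portal's actual convention.

```python
from pathlib import Path

import pandas as pd

# Assumption: daily dumps are downloaded into this directory with ISO-dated
# names (e.g. bilby_quant_documents_2025-01-15.parquet), so lexicographic
# order matches chronological order.
dump_dir = Path("bilby_dumps")
latest = max(dump_dir.glob("bilby_quant_documents_*.parquet"))

df = pd.read_parquet(latest)
print(f"Loaded {len(df)} documents from {latest.name}")
```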