# SVM Prefilter Service
The purpose of this service is to bring the SVM AI from the Birbox to the Edge.
This service exposes the same interface as the input service. Upon receiving a request it enqueues it and runs the AI prediction; if a bird chirp sound is predicted, the service forwards the request to the original input service in the cloud.
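The enqueue-predict-forward flow above can be sketched with stdlib-only code. This is a minimal illustration, not the service's real API: the function names, payload shape, and forwarding call are all assumptions, and the real SVM prediction (with the model fetched from the model service) replaces the placeholder.

```python
import queue
import urllib.request

work_queue: "queue.Queue[bytes]" = queue.Queue()

def predict_is_chirp(payload: bytes) -> bool:
    """Placeholder for the SVM prediction; the real service runs the
    model fetched via MODEL_INFO_URL here."""
    return False  # stand-in value for illustration

def forward_to_input_service(payload: bytes, input_service_url: str) -> None:
    """Forward a request to the original input service in the cloud."""
    req = urllib.request.Request(input_service_url, data=payload)
    urllib.request.urlopen(req, timeout=10)

def handle_request(payload: bytes) -> None:
    """Same interface idea as the input service: accept the request and
    enqueue it for prediction."""
    work_queue.put(payload)

def worker(input_service_url: str) -> None:
    """Drain the queue; only predicted bird chirps are forwarded."""
    while True:
        payload = work_queue.get()
        try:
            if predict_is_chirp(payload):
                forward_to_input_service(payload, input_service_url)
        finally:
            work_queue.task_done()

# In the real service a worker would run in a background thread, e.g.
# threading.Thread(target=worker, args=(input_service_url,), daemon=True).start()
```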
This service needs to communicate with the input service and the model service. Optionally, it can report its queue length to birb-latency-collector.
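The optional queue-length reporting could look like the following self-rescheduling sketch. The endpoint, JSON payload shape, and timeout are assumptions; in the real service `report_url` and the interval come from `REPORT_URL` and `REPORT_INTERVAL`.

```python
import json
import threading
import urllib.request

def start_reporter(get_queue_length, report_url: str, interval_s: float) -> None:
    """Best-effort reporter: POST the current queue length to
    birb-latency-collector, then reschedule itself after interval_s seconds."""
    body = json.dumps({"queue_length": get_queue_length()}).encode()
    req = urllib.request.Request(
        report_url, data=body, headers={"Content-Type": "application/json"}
    )
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError:
        pass  # reporting is optional; a failed POST must not crash the service
    timer = threading.Timer(
        interval_s, start_reporter, args=(get_queue_length, report_url, interval_s)
    )
    timer.daemon = True
    timer.start()
```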
This service can be configured through environment variables:
- SECRET_KEY: Flask secret key. See https://flask.palletsprojects.com/en/2.0.x/config/#SECRET_KEY
- SENTRY_DSN: Sentry DSN. See https://docs.sentry.io/product/sentry-basics/dsn-explainer/
- RELEASE_ID: Release ID for Sentry.
- RELEASEMODE: Release mode for Sentry.
- MODEL_INFO_URL: URL of the model service endpoint from which information about the used SVM model can be fetched.
- INPUT_SERVICE_URL: URL of the real input service.
- REPORT_URL: URL of the birb-latency-collector service to report queue length to.
- REPORT_INTERVAL: Seconds between reporting queue length to birb-latency-collector.
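For local development the variables above might be set like this; every value here is an illustrative placeholder, not a real endpoint or credential.

```shell
# Example environment for local development (all values are placeholders)
export SECRET_KEY="change-me"
export SENTRY_DSN="https://examplePublicKey@o0.ingest.sentry.io/0"
export RELEASE_ID="v1.0.0"
export RELEASEMODE="development"
export MODEL_INFO_URL="http://model-service/model/info"
export INPUT_SERVICE_URL="http://input-service/"
export REPORT_URL="http://birb-latency-collector/report"
export REPORT_INTERVAL=30
```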