IndiCam Service


The IndiCam service extracts measurements from images of mechanical oil gauges, using Machine Learning.

BETA NOTICE: This service is in Beta, and is best suited for early adopters with some technical skills and patience. The Machine Learning model may need to be extended to deal with specific oil gauges, so expect a high error rate for the first several days of running the system. See Training the Machine Learning System for details.

To use the service, do the following:

  1. Create an account and get an API key -- see the App Site section.
  2. Set up a (low-cost) camera to capture an image of the oil tank gauge.
  3. Either configure the client on a Home Assistant device, or write code to access the service directly via the API.

Privacy concerns

During the Beta phase, the Machine Learning is expected to make mistakes and fail, with failures diminishing over time. In order to facilitate communication during this learning ramp-up, uploaded gauge images are associated with user accounts.

To better fit with Home Assistant's goal of keeping data under the control of its owner, the following will be implemented after the beta is complete:

  1. When images are stored, store them without any reference to the account they originated from;
  2. Remove existing references between user accounts and images;
  3. Allow users to opt out of having their images stored, once their gauge is being processed correctly;
  4. And, implement ML measurement extraction on the Home Assistant system itself, or (perhaps) the camera device, so images need not be passed to the online service.

Capturing Gauge Images

A still or video camera must be used to capture an image of the oil tank gauge. The capture cycle for slow-changing oil tank gauges is an hour, so the video camera is essentially used as a still camera. The image must:

  1. Fit into the camera frame, including its top and base.
  2. Ideally occupy more than 75% of the image height.
  3. Be oriented right way up in the image, close to vertical -- i.e. avoid rotating the image with respect to the gauge.

In order to minimize spatial distortion, the camera must:

  1. Point directly at the center of the gauge, so the top and bottom are equidistant from the camera lens.
  2. Face the gauge squarely, i.e. the lens must be parallel to the gauge.

This is an example of an image conforming to the specification, taken with an ESP32-CAM using the LED flash in a poorly lit basement:

Oil Tank Gauge

Service Clients

Home Assistant Custom Component


A Home Assistant Custom Component exists to integrate the IndiCam service with a Home Assistant installation. Currently, the component is available via a custom HACS repository, while inclusion into the Home Assistant core is in progress.

This requires installing HACS and adding the custom repository.

Camera entities

Assuming an ESPHome-based camera as described above, after the camera has been detected and the device integration added by the user, the following entities will be created:

  • camera.indicator_camera -- The camera entity ID
  • switch.camera_flash -- The flash LED entity ID

Note that if you use a different camera or a different lighting solution, replace the entity IDs in the configuration below.

Install and Configure the Indicam Platform

BETA NOTICE: The configuration syntax allows for multiple cameras viewing multiple oil gauges. However, the component and the service currently allow only one camera.

Add the following sections to your Home Assistant configuration.yaml. First, to let Home Assistant save the captured image and the decorated image that shows the measurement results, allow reading and writing files in a selected location on disk (/tmp in this case):

  homeassistant:
    # Give access to the directory image files will be written to. This is optional if the "path_out" item
    # in the next section is not used.
    allowlist_external_dirs:
      - /tmp

Then, add the configuration for the IndiCam image processing platform:

  image_processing:
    - platform: indicam
      # Get this by registering at the App Site
      auth_key: !secret hausnet_indicam_key

      # The list of gauge cameras ("indicams") to process
      indicams:
        # The name of the indicam has to be the same as the device name at the service
        - name: my_indicam_name

          # The input camera's ID
          camera_entity_id: camera.my_camera

          # Where to store image files (optional, files will not be saved if left out)
          path_out: "/tmp/indicam"

          # Max and min fill line positions, relative to the top and bottom of the gauge, respectively.
          # Expressed as fractions of the body height, or (percentage of body height / 100). Optional.
          maximum: 0.07
          minimum: 0.07

          # Switch used to turn flash / lighting on and off. Optional.
          flash_entity_id: switch.my_flash

And the secrets.yaml file should contain the API key obtained from the IndiCam App:

hausnet_indicam_key: 1234abcdef5678...

Processed Image Display

Both the source image and a version of it that is decorated with the elements detected by the image processing algorithm can be displayed in the Home Assistant UI.

The images are saved at processing time in the directory specified in the allowlist_external_dirs option (see above). Assuming /tmp as the root destination directory, two files are created:

  • /tmp/indicam/[indicam-name]-snapshot.jpg: The snapshot taken for processing.
  • /tmp/indicam/[indicam-name]-measure.jpg: A version of the snapshot decorated with measurement elements.

The images can be displayed with the help of a Local File camera, using the paths above as the image sources. Picture Entity cards can be created from the camera entities and placed on a dashboard.
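As a sketch, assuming /tmp/indicam as the destination directory and my_indicam_name as the indicam name from the configuration above, the Local File cameras could be set up like this (the entity names are illustrative):

```yaml
camera:
  # Shows the raw snapshot that was sent for processing
  - platform: local_file
    name: Gauge Snapshot
    file_path: /tmp/indicam/my_indicam_name-snapshot.jpg
  # Shows the decorated version with the detected elements
  - platform: local_file
    name: Gauge Measurement
    file_path: /tmp/indicam/my_indicam_name-measure.jpg
```

The resulting camera entities can then be used as the source for Picture Entity cards.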

This is an example of a processed (or "decorated") image. Note that:

  1. The gauge body is outlined in yellow.
  2. To the left of the body there is a scale evenly dividing the body into ten units. This is useful for calibration of the minimum and maximum level positions (see Calibration).
  3. The minimum and maximum levels are marked in red.
  4. The top of the float is marked in red.

Decorated Oil Tank Gauge

IndiCam Client Library

A client library is available to ease the use of the service. It can be installed with:

pip install indicam_client

To create an instance of the client, pass an aiohttp client session, the service URL, and an API key obtained from the Apps Website:

import asyncio
import aiohttp
import indicam_client

API_KEY = "my api key..."
SVC_URL = ""

async def main():
    """Get a session, then create the client. Test the connection after."""

    session = aiohttp.ClientSession()
    api_client = indicam_client.IndiCamServiceClient(session, SVC_URL, API_KEY)

    connect_status = await api_client.test_connect()
    if connect_status == indicam_client.CONNECT_OK:
        pass    # Connection made
    elif connect_status == indicam_client.CONNECT_AUTH_FAIL:
        pass    # Authentication failed -- probably due to the wrong API key
    elif connect_status == indicam_client.CONNECT_FAIL:
        pass    # Connection failed, retry is advised

asyncio.run(main())

Now, given a JPEG image of the gauge, it can be uploaded to the service. After the upload, the client system should wait for results to be extracted:


async def process_image(image: bytes) -> GaugeMeasurement | None:
    """Upload a given image, then wait for the result to be ready.

    Return None if the upload or the measurement failed.
    """
    image_id = await api_client.upload_image(DEVICE_NAME, image)
    if not image_id:
        # Image was not uploaded
        return None
    # Poll until the service has extracted the measurement
    while not await api_client.measurement_ready(image_id):
        await asyncio.sleep(delay)
    measurement = await api_client.get_measurement(image_id)
    if not measurement:
        # No measurement could be extracted
        return None
    return measurement

The measurement returned is a GaugeMeasurement, and its values can be used to decorate the input image with the edges of the detected items.

class GaugeMeasurement:
    # Position of the left side of the detected gauge body, in pixels from the left side of the image.
    body_left: int
    # Position of the right side of the detected gauge body, in pixels from the left side of the image.
    body_right: int
    # Position of the top of the detected gauge body, in pixels from the top of the image.
    body_top: int
    # Position of the bottom of the detected gauge body, in pixels from the top of the image.
    body_bottom: int
    # The position of the top of the float, in pixels from the top of the image.
    float_top: int
    # The % full value, between 0% and 100% (or 0 to 1). Takes the camera config (calibration) into account.
    value: float

On the very first connection, the virtual device at the service has a default camera configuration (i.e. calibration) in which the offsets of the full and empty gauge marks from the top and the bottom of the gauge bell are assumed to be zero. The offsets can be calibrated to agree with the actual positions of the minimum and maximum marks:

async def update_camconfig(new_config: CamConfig) -> None:
    """Update the IndiCam service with offsets calculated to agree with actual min/max marks.

    The offsets are provided as factors of the body height from the top or the bottom of the
    gauge. First, get the numeric indicam ID using its name, then check the service
    configuration values -- if they agree with the local values, don't update.
    """
    indicam_id = await api_client.get_indicam_id(DEVICE_NAME)
    svc_cfg = await api_client.get_camconfig(indicam_id)
    if not svc_cfg:
        # Handle no service config returned...
        return
    if svc_cfg == new_config:
        # No need to update
        return
    cfg_created = await api_client.create_camconfig(indicam_id, new_config)
    if not cfg_created:
        # Deal with error
        return
The CamConfig is defined in the indicam_client module as:

class CamConfig:
    """ Holds the camera configuration. """
    min_perc: float
    max_perc: float


Note that the Beta is focusing on the Home Assistant client. More documentation with examples for the API will be provided once that client is working smoothly. That said, let us know if you're trying to use the API and need some guidance.

The API is documented with Swagger. The API documentation can be viewed here.

Authentication is required, and is done via an API key in the request headers. The API key is available via the App Site, and needs to be set in the request header like this:

    Authorization: Token 9944b091a9c62bcf9418ad846dd3e4bbdfc6ee4b
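For illustration, a minimal sketch of building that header in Python; the helper function is hypothetical, and the session URL is a placeholder:

```python
def auth_headers(api_key: str) -> dict[str, str]:
    """Build the Authorization header the IndiCam API expects."""
    return {"Authorization": f"Token {api_key}"}

# Attach to every request made by an aiohttp session (base URL is a placeholder):
#   session = aiohttp.ClientSession("https://<service-url>", headers=auth_headers(API_KEY))
```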

In addition to the token, the API requires a device to be defined at the service. This device represents your home automation system at the service, and its name is needed to generate a heartbeat for the actual device.

Training the Machine Learning System

As the Machine Learning (ML) system encounters new gauges, they may be sufficiently different from the gauges it has been trained on up to that point. This means the first several days' worth of images may fail.

There is a system in place where, if an image measurement extraction fails, a human will examine a failed measurement image, and extend the training of the ML system. Within a short period of time, the system should start recognizing the new gauge, and errors will become fewer until they're rare.

Measurement Calibration

The machine learning algorithm detects the body outline of the oil gauge. The float minimum and maximum marks on the gauge body are the real fill minimum and maximum; the minimum mark lies above the bottom of the gauge body, and the maximum mark below its top.

By default, the gauge body is used as the minimum and maximum reference when calculating the fill level as a percentage. To make the measurements more accurate, the locations of the minimum and maximum fill level marks can be defined relative to the body.

The fill line minimum and maximum can be defined in the configuration with the maximum and minimum settings. Both settings are defined as a fraction, or (percentage / 100), of the body height. Multiplying the gauge body height by the setting value gives the distance of the corresponding line from the body bottom or top.

For example, using settings of 0.05 (5%) for minimum and 0.1 (10%) for maximum, and a body height of 186 pixels, and assuming the image pixel row numbers start at zero from the image top and increase towards the bottom:

# Minimum line offset in pixels from bottom
pixel_offset_min = round(186 * 0.05)            # = 9
# Maximum line offset in pixels from top
pixel_offset_max = round(186 * 0.1)             # = 19
# Line locations in pixels from top of image, assuming body top is at 132 pixels from top of image
pixel_loc_min = (132 + 186) - pixel_offset_min  # = 309
pixel_loc_max = 132 + pixel_offset_max          # = 151

# And how the measurement is calculated, assuming the float top is at 80% of the gauge body, at pixel row
# 132 + round(186 * (1 - 0.8)) = 169. The measurement is a fraction of the distance between the minimum
# and the maximum.
measurement = abs(169 - pixel_loc_min) / abs(pixel_loc_max - pixel_loc_min)  # = 140 / 158 ≈ 0.886

So in this example, the calibrated measurement is about 89%, compared to 80% for the uncalibrated.
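The arithmetic above can be wrapped in a small helper. This is a sketch; the function name and parameters are illustrative, not part of the component:

```python
def calibrated_measurement(
    body_top: int, body_bottom: int, float_top: int,
    min_frac: float, max_frac: float,
) -> float:
    """Fill fraction between the calibrated minimum and maximum lines.

    Pixel rows start at zero at the image top and increase towards the bottom.
    """
    height = body_bottom - body_top
    loc_min = body_bottom - round(height * min_frac)  # minimum line, offset up from the bottom
    loc_max = body_top + round(height * max_frac)     # maximum line, offset down from the top
    return abs(float_top - loc_min) / abs(loc_max - loc_min)

# The worked example: body top at row 132, bottom at row 318, float top at row 169
print(round(calibrated_measurement(132, 318, 169, 0.05, 0.1), 3))  # prints 0.886
```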

When displaying the decorated image in Home Assistant, a scale is displayed to the left of the gauge body, dividing the body into ten parts. The scale can be used to estimate the offset percentages visually. Also, the positions of the current minimum and maximum are displayed by red lines, and these lines, together with the scale and the current setting values, can be used to fine-tune the settings.

When changing the settings, a restart of Home Assistant is needed for the new settings to take effect.

Using an ESP32-CAM

Any video camera should be suitable for capture. Here we provide implementation notes for the ESP32-CAM, since it is popular, affordable, flexible, and easily integrated with the Home Assistant + ESPHome platforms.

The ESP32-CAM module contains a wi-fi enabled ESP32 processor and an OV2640 camera chip, for under $10. Combined with a development carrier board (ESP32-CAM-MB) that contains a power supply, serial communications, and reset / programming buttons, it makes an effective and very affordable camera for this application.

The first step to use this camera is to program it with the ESPHome firmware. The ESPHome website contains instructions for programming the firmware. Here is the configuration for a bare-bones camera:

esphome:
  name: indicam
  comment: Indicator camera on ESP32CAM.
  platformio_options:
    build_flags:
      # Looks like this is needed for a build outside a Home Assistant environment.
      - "-DCONFIG_OV2640_SUPPORT"
      #- "-DCONFIG_SCCB_CLK_FREQ=100000"

esp32:
  board: esp32cam
  framework:
    type: esp-idf
    version: recommended

# Enable logging
logger:
  level: DEBUG

# Enable Home Assistant API
api:
  password: ""

ota:
  password: ""

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

switch:
  - platform: gpio
    pin: GPIO4
    name: "Camera Flash"

i2c:
  sda: 26
  scl: 27
  scan: false
  frequency: 100kHz

# For the AI-Thinker board, using this as a guide: 
esp32_camera:
  name: Indicator Camera
  external_clock:
    pin: GPIO0
    frequency: 10MHz
  i2c_pins:
    sda: GPIO26
    scl: GPIO27
  data_pins: [GPIO5, GPIO18, GPIO19, GPIO21, GPIO36, GPIO39, GPIO34, GPIO35]
  vsync_pin: GPIO25
  href_pin: GPIO23
  pixel_clock_pin: GPIO22
  power_down_pin: GPIO32
  # Image settings
  #max_framerate: 1 fps
  idle_framerate: 0 fps
  resolution: 1600x1200
  jpeg_quality: 10


  • The idle_framerate is set to 0, meaning no frames will be captured until a request is made from the Home Assistant controller.
  • The use of the LED Flash on the ESP32-CAM is optional, and it can be left off if adequate ambient light exists, or, if another solution is used for lighting.

Configuration for ESP32-based boards other than the ESP32-CAM can be found in the ESPHome ESP32 Camera documentation.

On-boarding and Machine Learning

During the beta phase, the Machine Learning system is being taught to extract measurements from all the types of gauges that can be encountered in the wild. All the gauges must have a traditional bell + float construction, but their relative sizes, markings, and lighting conditions may require the model to be adapted.

What this means for beta users is that, during the period following set-up, there may be situations where a measurement fails and the ML model needs to be adapted to make it succeed.