Dejan Levec

Monitoring electricity consumption

I have always been interested in knowing how much energy I am using and have been experimenting with different approaches to measuring consumption. My favorite way to measure electricity consumption is to use current transformers/clamps around the wires coming into the house. Sadly, I don’t have any space left in the electrical box to put them in.

About a year ago I found a solution: point a web camera at the electricity meter, watch the blinking LED, and calculate consumption from it. I made a simple prototype by gluing a webcam to the outside electrical box (where we have the electricity meter) and writing a simple Forms application in C# with bindings to the OpenCV library.

The problem was that this prototype consumed almost a whole core on an older dual-core AMD Turion laptop, which draws more than 40 W. That is not a cheap solution, and I also need that laptop for other purposes.

After a week or two of testing, I finally removed the laptop and kind of forgot about the whole idea. Later, I tried two different ways to capture the blinking LED, but neither worked as it should.

Two days ago I remembered it and wanted to port everything to the Raspberry Pi, and in the future maybe even to a small router.

Over the last two days I tried two different approaches:

1.) Capture frames using v4l2 (in my opinion this was the best idea, and it should be as lightweight as possible)

Capturing frames with v4l2 seemed like a hard thing to do, and it would actually have been really hard had I not used code from mjpeg-streamer. I went through the source of mjpeg-streamer, quickly found the most interesting module – input_uvc – and searched for usable source code.
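
For anyone curious what the borrowed capture code boils down to, a bare-bones v4l2 mmap capture loop looks roughly like this. This is only a sketch (device path, resolution and buffer count are my assumptions), not the actual code taken from mjpeg-streamer:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
    /* Device path, resolution and buffer count are assumptions. */
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct v4l2_format fmt = {0};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width  = 640;
    fmt.fmt.pix.height = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;   /* raw YUYV frames */
    fmt.fmt.pix.field = V4L2_FIELD_ANY;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

    struct v4l2_requestbuffers req = {0};
    req.count  = 4;
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) { perror("VIDIOC_REQBUFS"); return 1; }

    void *buffers[4];
    for (unsigned i = 0; i < req.count; i++) {
        struct v4l2_buffer buf = {0};
        buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index  = i;
        ioctl(fd, VIDIOC_QUERYBUF, &buf);
        buffers[i] = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, buf.m.offset);
        ioctl(fd, VIDIOC_QBUF, &buf);              /* hand the buffer to the driver */
    }

    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_STREAMON, &type);

    for (;;) {
        struct v4l2_buffer buf = {0};
        buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0) { perror("VIDIOC_DQBUF"); break; }
        /* buffers[buf.index] now holds one YUYV frame of buf.bytesused bytes;
           this is where the intensity check on the area of interest happens. */
        ioctl(fd, VIDIOC_QBUF, &buf);              /* re-queue for the next frame */
    }

    ioctl(fd, VIDIOC_STREAMOFF, &type);
    close(fd);
    return 0;
}
```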

After I successfully got frames, I needed to process them to figure out when the LED blinked. For debugging purposes I also added code to measure performance (frames per second) and to save captured frames.

The project is available at:

The application runs in the background (it’s a console application), watches captured frames for a change of color intensity in a specified area of interest, and makes an HTTP request to a server to save the results.

How does it work?

  • The main function gets raw frames in YUYV format from the webcam.
  • The function do_my_thing uses two for loops to iterate over the received frame and get the average intensity of the red color in the specified area of interest.
  • If the current intensity level is larger than the average intensity level of the previous 10 frames plus 10 levels (to discard noise), we mark it as a blink of the LED (a rough sketch of this step follows the list).
  • We then wait for a frame with an intensity level lower than the one marked as a blink and make an HTTP request to the web server. (A single blink can be captured on multiple frames.)
  • A simple PHP script running on the web server saves the data to a database.
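
As a rough illustration of the per-frame step described above: the ROI coordinates, the helper names and the use of the V (red chroma) byte as the “red” value are my assumptions here, not the exact code from the project.

```c
/* Average "red" level over a rectangular area of interest in a raw YUYV
 * frame.  YUYV stores Y0 U Y1 V for every two pixels, so the V (red
 * chroma) byte sits at offset +3 of each 4-byte macropixel. */
static int roi_red_level(const unsigned char *frame, int frame_width,
                         int x0, int y0, int w, int h)
{
    long sum = 0;
    int samples = 0;

    for (int y = y0; y < y0 + h; y++) {
        for (int x = x0 & ~1; x < x0 + w; x += 2) {   /* one V byte per 2 px */
            sum += frame[(y * frame_width + x) * 2 + 3];
            samples++;
        }
    }
    return samples ? (int)(sum / samples) : 0;
}

/* A frame is marked as a blink when its level exceeds the running average
 * of the previous 10 frames by more than 10 intensity levels. */
static int is_blink(int level, int average_of_last_10)
{
    return level > average_of_last_10 + 10;
}
```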

I should also mention my genius idea for keeping the intensity levels of the last 10 frames – a circular buffer. When it gets full, it replaces the oldest position with a new intensity level – that way I always have the last 10 intensity levels and don’t need to move any data around.
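
In code the circular buffer is only a few lines; this is a sketch with names of my own choosing:

```c
#define HISTORY 10

struct level_history {
    int values[HISTORY];
    int pos;     /* index of the slot that will be overwritten next */
    int count;   /* how many slots are filled so far (<= HISTORY)   */
};

/* Overwrite the oldest entry with the newest intensity level. */
static void history_push(struct level_history *h, int level)
{
    h->values[h->pos] = level;
    h->pos = (h->pos + 1) % HISTORY;
    if (h->count < HISTORY)
        h->count++;
}

/* Average of the stored levels; this feeds the blink threshold check. */
static int history_average(const struct level_history *h)
{
    long sum = 0;
    for (int i = 0; i < h->count; i++)
        sum += h->values[i];
    return h->count ? (int)(sum / h->count) : 0;
}
```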

I need to test this solution over the following week to see if there are any problems with capturing the LED, but the only improvement I can see right now is to add threads: one thread for frame capturing and processing, and another for sending HTTP requests. Currently the HTTP request blocks capturing new frames, which may introduce a slight delay or lower fps.
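
One way to do that split, sketched with POSIX threads (the names and the simple counter hand-off are assumptions; none of this is in the project yet): the capture loop only bumps a counter and carries on, while a second thread does the slow HTTP call.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int pending_blinks = 0;

/* Called from the capture/processing thread whenever a blink is detected;
 * returns immediately, so capturing never waits on the network. */
void report_blink(void)
{
    pthread_mutex_lock(&lock);
    pending_blinks++;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}

/* Reporting thread: waits for blinks and sends them one by one. */
void *reporter_thread(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (pending_blinks == 0)
            pthread_cond_wait(&cond, &lock);
        pending_blinks--;
        pthread_mutex_unlock(&lock);

        /* Placeholder for the blocking HTTP request to the PHP script. */
        printf("would send HTTP request now\n");
    }
    return NULL;
}
```

The reporter thread would be started once at startup with pthread_create(), and the capture loop would call report_blink() instead of sending the request itself.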

Here is a picture of my current “GUI”:

MySQL Workbench running a query that counts reported blinks in the last minute. Now I need to make a web app that will show statistics and the current electricity consumption.
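
For the web app, converting blink counts into power is simple arithmetic once the meter’s impulse constant is known (it is printed on the meter faceplate; the 1000 imp/kWh in the sketch below is only an example):

```c
/* Average power over the last minute from the number of blinks counted and
 * the meter's impulse constant (e.g. 1000 imp/kWh, check your own meter). */
double average_power_watts(int blinks_last_minute, double impulses_per_kwh)
{
    /* blinks/min * 60 = impulses/hour; divided by imp/kWh that is kW,
     * times 1000 to get watts.  50 blinks/min on a 1000 imp/kWh meter
     * works out to 3000 W. */
    return blinks_last_minute * 60.0 / impulses_per_kwh * 1000.0;
}
```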

2.) Capture and process frames with the OpenCV library (this should be the easiest)

I also tried this approach, and it really reduces the number of lines of code written, but the Raspberry Pi captures about 1 frame per second via the OpenCV library, compared to about 15–20 with the v4l2 API.
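
For comparison, the frame-grabbing part in OpenCV really is only a couple of calls; a sketch using the old C API (the camera index and the rest are assumptions):

```c
#include <opencv/highgui.h>

int main(void)
{
    /* Camera index 0 is an assumption: the first video device. */
    CvCapture *capture = cvCaptureFromCAM(0);
    if (!capture)
        return 1;

    for (;;) {
        /* cvQueryFrame grabs and decodes one frame; the image is owned by
         * the capture structure, so it must not be released here. */
        IplImage *frame = cvQueryFrame(capture);
        if (!frame)
            break;
        /* ... same intensity check on the area of interest as before ... */
    }

    cvReleaseCapture(&capture);
    return 0;
}
```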
