This hack uses a Raspberry Pi as the hub device; I have it installed in my corridor.
The idea of the hub device is to provide a model where you can easily plug and play new edge devices and program them using an event model.
When a new device comes in, it waits for the hub device to start pairing. The hub assigns a Device-ID and queries it for identity. Known device identities have their capabilities stored in the cloud; otherwise, the device responds to the hub's capability query with its own capabilities. The edge device is then put into a suspended state.
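As a rough sketch of that pairing flow (the class and method names here are my own illustration, not the actual hub code):

```python
import itertools

class Hub(object):
    """Minimal sketch of the hub-side pairing flow (illustrative names)."""
    _ids = itertools.count(1)

    def __init__(self, known_capabilities):
        # known_capabilities stands in for the cloud-backed identity store
        self.known = known_capabilities
        self.devices = {}

    def pair(self, device):
        device_id = next(self._ids)             # assign a Device-ID
        identity = device.identify()            # query for identity
        caps = self.known.get(identity)         # known identities: capabilities from cloud
        if caps is None:
            caps = device.query_capabilities()  # otherwise ask the device itself
        self.devices[device_id] = caps
        device.suspend()                        # edge device goes to suspend state
        return device_id

class Device(object):
    def __init__(self, identity, caps):
        self.identity, self.caps = identity, caps
        self.suspended = False
    def identify(self): return self.identity
    def query_capabilities(self): return self.caps
    def suspend(self): self.suspended = True

hub = Hub(known_capabilities={'pir-v1': ['motion-event']})
pir_id = hub.pair(Device('pir-v1', None))        # known: capabilities from "cloud"
cam_id = hub.pair(Device('cam-x', ['capture']))  # unknown: device reports its own
print(hub.devices[pir_id], hub.devices[cam_id])
```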
The user interface (web/app) shows any new device on the network. The user can choose to enable or block it. Once a device is enabled, the user can build a use-case with an easy drag-and-drop interface to chain capabilities.
For example: a PIR's capability is to trigger an event on motion, and a camera device can subscribe to it, so the camera only triggers when the PIR reports motion. The camera can in turn trigger a light meter that generates a low-light event, which a light subscribes to in order to turn on. Thus we can build complex chains of events that tie all edge devices together through the hub, and none of the edge devices needs to reach the Internet.
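The capability chaining above is essentially publish/subscribe. A toy sketch (the event names and lux threshold are my own choices, not the hub's actual implementation):

```python
class EventBus(object):
    """Toy publish/subscribe bus, like the hub's capability chaining."""
    def __init__(self):
        self.subs = {}
    def subscribe(self, event, handler):
        self.subs.setdefault(event, []).append(handler)
    def publish(self, event, **data):
        for handler in self.subs.get(event, []):
            handler(**data)

bus = EventBus()
log = []

# Camera subscribes to PIR motion; the light meter runs after capture; the
# light subscribes to the low-light event: PIR -> camera -> light meter -> light.
def camera(**_):
    log.append('capture')
    bus.publish('light-level', lux=5)

def light_meter(lux):
    if lux < 10:                      # low-light threshold (illustrative)
        bus.publish('low-light')

def light(**_):
    log.append('light-on')

bus.subscribe('motion', camera)
bus.subscribe('light-level', light_meter)
bus.subscribe('low-light', light)

bus.publish('motion')                 # PIR reports motion
print(log)                            # ['capture', 'light-on']
```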
- When someone walks up, my hub device gets to know about it through the PIR motion sensor. It triggers the USB camera to capture a picture and upload it to Azure cloud storage.
- The image is sent to Azure Cognitive Services to identify the people in it. If the person is identified with confidence, I save that information; otherwise I mark the image for review later.
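The save-or-review decision boils down to thresholding the top candidate's confidence. A sketch of that triage step, with the response shape modelled loosely on the Face API's identify output and every value made up:

```python
import json

CONFIDENCE_THRESHOLD = 0.6   # my cut-off here is arbitrary; tune for your group

def triage(identify_response, threshold=CONFIDENCE_THRESHOLD):
    """Given an identify-style response (list of faces, each with ranked
    candidates), return (matches, needs_review): confident matches are
    saved, the rest are flagged for later review."""
    matches, needs_review = [], []
    for face in identify_response:
        candidates = face.get('candidates', [])
        if candidates and candidates[0]['confidence'] >= threshold:
            matches.append(candidates[0]['personId'])
        else:
            needs_review.append(face['faceId'])
    return matches, needs_review

# Shape modelled on a Face API identify response (values invented)
response = json.loads("""[
  {"faceId": "f1", "candidates": [{"personId": "alice", "confidence": 0.82}]},
  {"faceId": "f2", "candidates": [{"personId": "bob",   "confidence": 0.31}]},
  {"faceId": "f3", "candidates": []}
]""")
print(triage(response))   # (['alice'], ['f2', 'f3'])
```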
#Interrupt handler
def MOTION(sensor):
    backup()
    beep()
    GPIO.output(irLight, GPIO.HIGH)
    clip()
    GPIO.output(irLight, GPIO.LOW)
    post_tv()
    post_cloud()
- Using Azure Notification Hubs, I send a notification to my mobile with a thumbnail of the image taken. I also ping myself on GTalk with a link to the picture (I am unable to embed the image thumbnail in the message yet). This gives me a backup way to see who is showing up at the door, even when I am not around.
- Through the app, I can review the images where the person's identity could not be established, and either add the person to an existing user group or create a new one (code for the mobile app is not shared). This helps the system get better at identifying people I know.
I am trying to add social media integration so I can scan through my friends list and add all of them to my database; this way I'll be able to add more context to the results returned.
- The Pi is also connected to my TV with an HDMI cable, so it puts the picture on the TV screen too, and then beeps softly to let people in my house know someone is approaching the door. I am exploring options to integrate it using the Microsoft Display Adapter (Miracast); for now this is an HDMI-connected display.
I integrated it with my TV remote, so people in my family can see the door on the TV with the same remote they use to switch channels (easy, right?). This action does not upload the picture to the cloud and sends no mobile notifications: the event generated by the IR receiver is acted upon by the camera, and the output is redirected to the TV instead of the default cloud path.
from subprocess import call

call(["fswebcam", "-r 1280x768", "-s Gamma=50%", "/home/pi/image.jpg"])
call(["pkill", "fbi"])
call(["/home/pi/ttyecho", "-n", "/dev/tty1",
      "fbi -a -noverbose -d /dev/fb0 /home/pi/image.jpg"])
- I can call my door from the same phone app using Skype. The call lands on a cheap Android phone capable of running Skype. I chose Android because it lets me auto-answer calls. The only contact added to the Skype account on my door is me! Now I can have an interactive conversation with someone standing at my door.
- Integrating an auto door-unlock feature is trivial once I have identified a person, but that could also be a security risk, so I have not enabled it beyond a PoC.
- I integrated a home-health watch feature. These are small, low-power compound edge devices I built with a Digispark Pro and an HM-10 (BTLE) module. I have published some projects around making these work separately on Hackster.io/achidra (some are still private). The purpose of each of these devices is limited; for example, a bathroom monitor keeps track of gas leakage and motion-controlled lights, and reports usage and status to the hub device.
- One of the edge devices uses visual notifications, tied to various events, through lights mounted on my desk, such as the door-bell, just in case I have headphones on. It also watches for motion in my room at night and slowly turns on a light when someone wakes up, to assist.
By my calculations, the battery should last about six months since I use low-power states heavily; however, I am happy with 2-3 months for now and am working to fine-tune the circuits to last a year on battery!
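For the curious, this kind of estimate is simple duty-cycle arithmetic. A sketch with assumed (not measured) numbers:

```python
# Back-of-the-envelope battery estimate for a duty-cycled BTLE edge device.
# All numbers below are assumptions for illustration, not measurements.
battery_mah      = 2000     # e.g. a 2000 mAh cell
active_ma        = 15.0     # microcontroller + HM-10 awake and advertising
sleep_ma         = 0.05     # deep-sleep draw
active_s_per_min = 1.0      # wake for ~1 s every minute

duty = active_s_per_min / 60.0
avg_ma = active_ma * duty + sleep_ma * (1 - duty)
hours = battery_mah / avg_ma
print("avg draw: %.3f mA, runtime: ~%.0f days" % (avg_ma, hours / 24.0))
```

Shaving the active window or the sleep current moves the result dramatically, which is why tuning the circuit is where the remaining gains are.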
- The next big thing in MannyBot is being able to talk to it. I am working on integrating Cortana speech recognition to control the TV with voice. I have data in Azure cloud storage about my set-top box, TV remote codes, and channel mappings.
- I announce a TV channel name, which is converted to text by the Bing Speech-to-Text service and sent to the cloud to fetch the channel number and corresponding IR codes. These are then sent to another edge device, which is a connected remote. Thus, I can now tell Cortana to switch to Star Sports 2 HD!
A little intelligence will help here: I would like to ask Cortana to watch for the rain to stop and remind me when my favorite game is about to start, while I watch a movie in the meantime. For now, my alarms are announced at home so I don't miss a meeting invite.
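The channel lookup itself is a simple table from normalized speech text to channel number and IR codes. A sketch with placeholder values (my real mapping lives in Azure storage, and the IR codes below are invented):

```python
# Placeholder channel table: recognized text -> channel number and IR codes.
CHANNEL_MAP = {
    "star sports 2 hd": {"number": 287, "ir": ["0x20DF10EF", "0x20DF8877"]},
    "discovery":        {"number": 551, "ir": ["0x20DF40BF"]},
}

def channel_command(spoken_text):
    """Normalize the recognized text and return the IR codes to transmit,
    or None if the channel is unknown (so the hub can ask again)."""
    key = spoken_text.strip().lower()
    entry = CHANNEL_MAP.get(key)
    if entry is None:
        return None
    return entry["ir"]

print(channel_command("Star Sports 2 HD"))   # ['0x20DF10EF', '0x20DF8877']
```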
EDIT: I have also added my fish tank story to this project; I will publish details on how to build it in a separate post. I built a custom pH monitor and water-level indicator, which help me decide what percentage of the water to change, and a fish food feeder. When the food supply is low, I let MannyBot know and it sets an alarm on my phone. The alarm is handled by my app using geo-fencing and only triggers when I am near the supply shop; however, when the supply drops to a critical value, the geo-fencing limit is increased to 100 km to make sure I actually go to the shop. Details of how the tank's pH value is changing and how frequently I have been changing the water are available through the app. The water out of the fish tank goes to my plants; I will share more details on my plans for that in my next experiment soon.
#Fish food feeder
def ffFeeder_thread():
    if os.path.isfile("/home/pi/feedtime.log") == False:
        fh = open("/home/pi/feedtime.log", "w")
        fh.write(datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S'))
        fh.close()
    Timer(5, ffScheduleJob, ()).start()

def ffScheduleJob():
    #trimmed for brevity
    if time_diff > timedelta(seconds=10):
        ffFeeder()
        ffUpdateCloud()
        fh = open("/home/pi/feedtime.log", "w")
        fh.write(datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S'))
        fh.close()

#Servo to drop food
def ffFeeder():
    for x in range(0, 5):
        food.ChangeDutyCycle(5)
        time.sleep(1)
        print "-"
        food.ChangeDutyCycle(7.5)
        time.sleep(2)
        print "/"
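The widening geo-fence from the food-supply story can be expressed as a couple of small functions; the radii and thresholds below are illustrative, not the app's actual values:

```python
# Supply-driven geo-fence: the alert radius widens as the food supply drops,
# so a critically low supply fires even far from the shop. Thresholds and
# radii here are made up for illustration.
def fence_radius_km(supply_pct, low=30, critical=10):
    if supply_pct <= critical:
        return 100.0        # critical: make sure I actually go to the shop
    if supply_pct <= low:
        return 2.0          # merely low: only nag me near the supply shop
    return 0.0              # plenty left: no alarm

def should_alert(supply_pct, distance_km):
    return distance_km <= fence_radius_km(supply_pct)

print(should_alert(25, 1.5))   # near the shop, running low  -> True
print(should_alert(25, 50))    # far away, merely low        -> False
print(should_alert(5, 80))     # critical: alert 80 km out   -> True
```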
A few more use-cases remain to be integrated over BT LE, along with a complete rewrite of the app to cover new module integration. It is fun to see how this small hub device is growing...
Since this has evolved in my mind and in my home setup, I am unable to share a comprehensive list or step-by-step build details. I expect interested readers to be comfortable building IoT devices and Azure applications. I am also documenting my experiments as small projects on Hackster that you may refer to.
- Set up your PC with Visual Studio 2015, the Arduino IDE, and IoT Core project templates/SDKs
- Set up your IoT components, USB web camera, etc.
- Set up your Azure account keys
- Raspberry Pi with Python in my case; you can use Windows IoT Core instead
- I have built it with the UDOO Quad and Neo models too
Simple Edge Device
- Edge devices can be built with an Arduino or a Digispark
- Connectivity can be IR, BT, or WiFi, depending on your device requirements and your ability to program them
Compound Edge Devices
- I built compound edge devices with a Digispark and HM-10 BT LE modules for energy efficiency.
#!/usr/bin/env python
from azure.storage.blob import BlobService
from datetime import datetime, timedelta
from threading import Timer
from subprocess import call
from time import strftime
import RPi.GPIO as GPIO
import time
import uuid
import sys
import xmpp
import os.path

sensor = 4
servo = 17
beeper = 18
irLight = 24
#MOTION_DELAY = 60

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
GPIO.setup(sensor, GPIO.IN, GPIO.PUD_DOWN)
GPIO.setup(servo, GPIO.OUT)
GPIO.setup(beeper, GPIO.OUT, GPIO.PUD_DOWN)
GPIO.setup(irLight, GPIO.OUT, GPIO.PUD_DOWN)

pwm = GPIO.PWM(beeper, 1000)
food = GPIO.PWM(servo, 50)

#turned_on = False
#last_motion_time = time.time()

blob_service = BlobService(account_name='AZURE_ACCOUNT_NAME',
                           account_key='AZURE_ACCOUNT_KEY')
blob_service.create_container('achindra')
blob_service.set_container_acl('achindra', x_ms_blob_public_access='blob')

offline = False

#Fish food feeder
def feeder_thread():
    if os.path.isfile("/home/pi/feedtime.log") == False:
        fh = open("/home/pi/feedtime.log", "w")
        fh.write(datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S'))
        fh.close()
    Timer(5, scheduleJob, ()).start()

def scheduleJob():
    print "feed job"
    Timer(5, scheduleJob, ()).start()
    fh = open("/home/pi/feedtime.log", "r")
    dt = fh.read()
    print dt
    #t = datetime.strptime(dt, '%Y-%m-%d %H:%M:%S')
    fh.close()
    time_diff = datetime.utcnow() - datetime.utcfromtimestamp(float(dt))
    print time_diff
    if time_diff > timedelta(seconds=10):
        feeder()
        fh = open("/home/pi/feedtime.log", "w")
        fh.write(datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S'))
        fh.close()

def feeder():
    for x in range(0, 5):
        food.ChangeDutyCycle(5)
        time.sleep(1)
        print "-"
        food.ChangeDutyCycle(7.5)
        time.sleep(2)
        print "/"

#Interrupt handler
def MOTION(sensor):
    global jabber
    global offline
    #last_motion_time = time.time()
    #if time.time() > (last_motion_time + MOTION_DELAY):
    backup()
    beep()
    GPIO.output(irLight, GPIO.HIGH)
    clip()
    GPIO.output(irLight, GPIO.LOW)
    post_tv()
    post_cloud()

def clip():
    call(["fswebcam", "-S 4", "-r 1280x768", "-s Gamma=50%", "/home/pi/image.jpg"])

def post_tv():
    call(["pkill", "fbi"])
    call(["/home/pi/ttyecho", "-n", "/dev/tty1",
          "fbi -a -noverbose -d /dev/fb0 /home/pi/image.jpg"])

def beep():
    pwm.start(100)
    time.sleep(0.5)
    pwm.stop()

def backup():
    call(["cp", "-f", "/home/pi/image.jpg", "/home/pi/preCam.jpg"])

def post_cloud():
    blobName = str(uuid.uuid4())
    blob_service.put_block_blob_from_path('achindra', 'DoorCam',
                                          '/home/pi/image.jpg',
                                          x_ms_blob_content_type='image/png')
    blob_service.put_block_blob_from_path('achindra', blobName,
                                          '/home/pi/image.jpg',
                                          x_ms_blob_content_type='image/png')
    #try:
    #    if offline == True:
    #        jabber.auth('GMAIL_LOGIN', 'GMAIL_PASSWORD', 'DoorBot')
    #        offline = False
    #    else:
    message = xmpp.Message('email@example.com',
                           'http://pisecure.blob.core.windows.net/achindra/' +
                           blobName + ' ' + strftime("%d/%m/%Y %H:%M"))
    message.setAttr('type', 'chat')
    jabber.send(message)
    #except:
    #    offline = True

#Start
time.sleep(2)
#feeder_thread()
try:
    global jabber
    jabber = xmpp.Client('gmail.com')
    jabber.connect(server=('talk.google.com', 5222))
    jabber.auth('GMAIL_LOGIN', 'GMAIL_PASSWORD', 'DoorBot')
    offline = False

    GPIO.add_event_detect(sensor, GPIO.RISING, callback=MOTION, bouncetime=10000)
    while 1:
        time.sleep(100)

#Quit
except KeyboardInterrupt:
    GPIO.cleanup()

GPIO.cleanup()
from subprocess import call

#call(["pkill", "fbi"])
#call(["/home/pi/ttyecho", "-n", "/dev/tty1", "fbi -a -noverbose -d /dev/fb0 /home/pi/waiting.jpeg"])

call(["fswebcam", "-r 1280x768", "-s Gamma=50%", "/home/pi/image.jpg"])
call(["pkill", "fbi"])
call(["/home/pi/ttyecho", "-n", "/dev/tty1",
      "fbi -a -noverbose -d /dev/fb0 /home/pi/image.jpg"])
#!/usr/bin/env python
import sys
import time
import RPi.GPIO as io
import subprocess

io.setmode(io.BCM)

SHUTOFF_DELAY = 60   # seconds
PIR_PIN = 25         # 22 on the board
LED_PIN = 16

def main():
    io.setup(PIR_PIN, io.IN)
    io.setup(LED_PIN, io.OUT)
    turned_off = False
    last_motion_time = time.time()

    while True:
        if io.input(PIR_PIN):
            last_motion_time = time.time()
            io.output(LED_PIN, io.LOW)
            print ".",
            sys.stdout.flush()
            if turned_off:
                turned_off = False
                turn_on()
        else:
            if not turned_off and time.time() > (last_motion_time + SHUTOFF_DELAY):
                turned_off = True
                turn_off()
            if not turned_off and time.time() > (last_motion_time + 1):
                io.output(LED_PIN, io.HIGH)
        time.sleep(.1)

def turn_on():
    subprocess.call("sh /home/pi/photoframe/monitor_on.sh", shell=True)

def turn_off():
    subprocess.call("sh /home/pi/photoframe/monitor_off.sh", shell=True)

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        io.cleanup()