Tablogic End-of-Year Presentation

On Wednesday we showed off Tablogic at the miLAB event, both doing live demos and showing our product video.

It was so exciting to finally show our product to people outside the lab, and it was amazing to see how each person saw the product in their own line of work – from office spaces to hospitals and even restaurants!

As we stated before, we are open-sourcing our code in order to give others the ability to build and modify our product for their own use. The code for all modules can be found here.

We’d like to thank all the miLAB faculty members for the ideas, the support, and the unique chance to create something new out of nothing.

See you in the future,

Ido & Dan.


Finalizing Our Product

There’s only about a week left until the presentation, and we are putting in the final touches to put on the best show we can.

First of all, our old prototype tragically broke apart. However, that gave us the “excuse” to print our final prototype on the 3D printer!

It took a lot of splinters and filing, but eventually we got ourselves this beauty:

 

We are still trying to figure out how we should design the dashboard to allow functionality without reducing accessibility. Right now our dashboard shows some mock functions we could build given sufficient data:

All in all, we are feeling pretty confident about our product, and we can’t wait to show it off!

Final Mentors Presentation

Yesterday we had our final presentation in front of our mentors in preparation for the final event. In reality, this was our first AND last presentation in front of the course’s mentors, as Tablogic did not exist yet at the end of the first semester.

For our presentation, we wanted to show a pretty basic flow: data is pushing many research fields forward, but it is missing when it comes to study experiences in common study environments, and we want to change that – then simply explain how.

By now the prototype is ready and is already collecting data every so often. The only thing missing is the dashboard. We hurried things along: Ido created a NodeJS server that receives the data from the unit in a proper JSON format and writes it into DynamoDB, while Dan created the actual dashboard. The final result looks something like this:

IMG-20180514-WA0019.jpg
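For the curious, this is roughly the kind of query the dashboard can run against DynamoDB to pull recent readings. The sketch below is not our exact code: it assumes the sensorId/timestamp key scheme we set up earlier in the project, the aws-sdk DocumentClient, and a made-up table name:

    // Rough sketch: fetch the last hour of readings for one sensor from DynamoDB.
    // The table name and region are placeholders; the keys (sensorId as hash key,
    // timestamp as sort key) follow the schema described in an earlier post.
    const AWS = require('aws-sdk');

    const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

    function fetchRecentReadings(sensorId, callback) {
      const oneHourAgo = Date.now() - 60 * 60 * 1000;
      docClient.query({
        TableName: 'tablogic-readings',                    // placeholder table name
        KeyConditionExpression: 'sensorId = :id AND #ts >= :since',
        ExpressionAttributeNames: { '#ts': 'timestamp' },  // "timestamp" is a DynamoDB reserved word
        ExpressionAttributeValues: { ':id': sensorId, ':since': oneHourAgo },
      }, (err, data) => {
        if (err) return callback(err);
        callback(null, data.Items);                        // each item: sensorId, timestamp, sensorRead
      });
    }

    // The dashboard can then hand the items to whatever charting library it uses.
    fetchRecentReadings('mic-01', (err, items) => {
      if (err) return console.error(err);
      console.log('Got ' + items.length + ' readings');
    });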

We also found a pretty sleek design for the presentation on Google Slides. Just to share a small taste of it:

final-presentation-slide.PNG

Looks much better than the previous infrastructure design, right?

We got a lot of great feedback on our presentation. Some of the mentors really got it, or at least saw the potential in it, and were very hyped about what this project could become. At the same time, others were underwhelmed by the kind of data analysis we currently provide, but we figured that’s actually a good thing, because it means there’s a lot of room for deeper analysis and progress to be made.

The main criticism, however, was that the presentation was hard to follow and too technical for some people. For a team of two Computer Science students in a Communications project, that is pretty much expected. We got a great idea from our mentors to use better-defined personas to make the story easier to follow next time, and hopefully we can clear things up and make our presentation clear and impressive for the final event.

Design Decisions

Our system has been running for a week and, despite a few technical issues with AWS and some downtime on Heroku, it has been running pretty smoothly – we have quite a bit of data and all the sensors are working great.

Following a discussion with Daniel, our software mentor and instructor during the first semester, we realized that we should stick to hosting our Node server on Heroku rather than host a server over LAN on a Raspberry Pi – we don’t generate much traffic at the moment, and a remote server is clearly more dependable.

As we plan to build more units, we ran into availability problems with Wemos microcontrollers. Daniel and Zvika (our hardware mentor) also suggested using the IOIO microcontroller, an Arduino-like controller that connects to an Android phone for WiFi connectivity, and Adafruit IO, an IoT platform created by Adafruit.

For now, we’re continuing to work on our web dashboard, as the client side is working pretty smoothly so far. Below are a few graphs from the data recorded so far:

We hope that by next week, this data will be displayed on a web client (and calibrated, too).

Boxin’ Up

Since our previous prototype, we’ve decided to let go of the AWS IoT service. As we came closer to connecting our system, we noticed that we kept running into issues setting up the MQTT-based connection, which made passing the data into AWS DynamoDB the constant bottleneck.

To resolve this, we’ve decided to create a REST API for our microcontroller to communicate with and send its data through. The server will write the data directly to AWS DynamoDB through the dedicated SDK, without going through the AWS IoT module, and then we can analyze the data on our own using various utilities found online.
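As a rough sketch of what such an endpoint could look like (assuming an Express-based Node server and the aws-sdk DynamoDB DocumentClient; the route, table name and field names below are placeholders rather than our exact code):

    // Rough sketch of the REST endpoint the microcontroller posts its readings to.
    // The route, table name and field names are illustrative placeholders.
    const express = require('express');
    const AWS = require('aws-sdk');

    const app = express();
    const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

    app.post('/readings', express.json(), (req, res) => {
      const item = {
        sensorId: req.body.sensorId,        // which unit/sensor sent the reading
        timestamp: Date.now(),              // when the server received it
        sensorRead: req.body.sensorRead,    // the raw reading itself
      };
      docClient.put({ TableName: 'tablogic-readings', Item: item }, (err) => {
        if (err) {
          console.error('DynamoDB write failed', err);
          return res.sendStatus(500);
        }
        res.sendStatus(200);
      });
    });

    app.listen(process.env.PORT || 3000);

Keeping this layer thin means the microcontroller only needs to know a single URL, instead of dealing with the MQTT certificates and connection setup that kept giving us trouble.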

With guidance from Oren Geva, the lab’s industrial designer, we’ve decided to create a unit for each seat rather than a single kit for the entire table. That way, as long as we make the unit small enough, we can avoid running cables across the table, and we also get the benefit of cross-referencing data from multiple seats at the table. The simplest benefit is that we can tell whether a person is actually sitting at a given seat or not.

With that in mind, we are proud to present the concept design, thanks to Oren:

Concept Design from Oren Geva.jpg

The empty slots are intended for:

  • Screws to attach the unit to the table discreetly.
  • Microphone.
  • Temperature and humidity sensor.
  • Slider – to indicate a requested input.
  • IR sensor – to indicate movements as well as presence in front of the seat (no presence=no movement at all for extended periods of time).
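As a toy illustration of that last point, a window of recent IR readings could be turned into an occupied/empty decision along these lines (the thresholds and window size below are made-up numbers, not calibrated values):

    // Toy sketch: infer seat presence from a sliding window of IR distance readings.
    // The thresholds and window size are made-up numbers for illustration only.
    const WINDOW_SIZE = 120;        // e.g. the last minute of readings
    const NEAR_THRESHOLD_CM = 80;   // closer than this => someone is at the seat
    const MOTION_THRESHOLD_CM = 5;  // variation smaller than this => no movement

    function isSeatOccupied(readings) {
      if (readings.length < WINDOW_SIZE) return false;
      const recent = readings.slice(-WINDOW_SIZE);
      const min = Math.min(...recent);
      const max = Math.max(...recent);
      const someoneNear = min < NEAR_THRESHOLD_CM;
      const someMovement = (max - min) > MOTION_THRESHOLD_CM;
      // No presence = no movement at all for an extended period of time.
      return someoneNear || someMovement;
    }

    // Example: a flat, far-away signal means the seat is empty.
    console.log(isSeatOccupied(new Array(120).fill(150))); // false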

To make the units small enough, we’ve decided to replace the Arduino Uno with either an Arduino Nano connected to an ESP8266, or a Wemos microcontroller. The data will be collected for a researcher to analyze and offer insights to the students in order to improve their study experience.

First Prototype – From Arduino to AWS IoT Core through Raspberry Pi

Back when we started the current semester, our plan was to create a system that connects multiple Arduino-based clients to a central unit for the room, which continuously sends the data to AWS and receives the processed results. The system’s structure was detailed in this picture:

Wireframe Diagram

In order to build our first prototype, we wanted to fully implement one direction of communication in our system’s infrastructure: create a fully operable Arduino -> Raspberry Pi -> AWS IoT Core data channel. To build this initial prototype, we ordered a wide variety of sensors, many of which arrived in a malfunctioning state.

We hooked up our initial prototype as follows: an Arduino Uno controller is positioned in the miLAB classroom, with an IR distance sensor attached to the edge of the table. The controller is connected to an HC-06 Bluetooth pass-through module.

A Raspberry Pi 3 B is located in a nearby room, periodically trying to ping the HC-06 module. Once a connection has been established, the Arduino starts sending data periodically (currently every 0.5 seconds) to the Raspberry Pi, consisting of the sensor reading and the ID of the sensor.

Once the Raspberry Pi receives a packet of data, it sends a message to AWS IoT Core using a Thing Shadow representing the Raspberry Pi, which is treated as a “room”. The message includes a field with the ID of the Arduino controller, so data for all tables is stored in the same database table for further processing, with an option to filter out a specific table.
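For reference, here is a rough NodeJS sketch of what the bridging script on the Raspberry Pi does, assuming the HC-06 link is bound to a serial device such as /dev/rfcomm0 and using the aws-iot-device-sdk package; the certificate paths, thing name and endpoint below are placeholders:

    // Rough sketch of the Raspberry Pi bridge: read lines from the HC-06 serial
    // link and report each reading to the room's Thing Shadow in AWS IoT Core.
    // Paths, thing name and endpoint are placeholders for illustration.
    const awsIot = require('aws-iot-device-sdk');
    const SerialPort = require('serialport');
    const Readline = SerialPort.parsers.Readline;

    const shadows = awsIot.thingShadow({
      keyPath: './certs/private.pem.key',
      certPath: './certs/certificate.pem.crt',
      caPath: './certs/AmazonRootCA1.pem',
      clientId: 'tablogic-room-pi',
      host: 'xxxxxxxx.iot.us-east-1.amazonaws.com',
    });

    const port = new SerialPort('/dev/rfcomm0', { baudRate: 9600 });
    const parser = port.pipe(new Readline({ delimiter: '\n' }));

    shadows.on('connect', () => {
      shadows.register('tablogic-room', {}, () => {
        // Each line from the Arduino looks like {"sensorId":"ir-01","sensorRead":42}
        parser.on('data', (line) => {
          let reading;
          try {
            reading = JSON.parse(line);
          } catch (e) {
            return; // ignore malformed lines from the Bluetooth link
          }
          shadows.update('tablogic-room', { state: { reported: reading } });
        });
      });
    });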

2018-04-29-080821_1920x1080_scrot

In IoT Core, an Action Rule has been defined to store the received message’s data in a designated table corresponding to the current room, hosted in the AWS DynamoDB service. Using this table, one can process and segment the received data based on the keys and fields sent over time.

Screenshot from 2018-04-29 11-14-18

With this database continuously filling up, our initial prototype is complete. That being said, we did run into a few problems with the current system:

  1. Sending a message every 0.5 seconds caused us to nearly use up the free monthly quota within a couple of hours. Since our system requires us to send tips and info in real time, we now send data to AWS every 5-10 seconds, keeping the system responsive while vastly reducing the message load as we plan to add more sensors and tables. A similar option being considered is gathering up readings and sending them in sparser, larger messages, such as 60 bundled readings every half a minute (a small sketch of this bundling idea follows the list).
  2. The initial run of the system taught us that it is nearly impossible to keep a script running for extended periods of time without proper fail-safes. From exceptions caused by miscommunication, to power outages, to the Raspberry Pi randomly shutting off after a few hours, there are many problems that need to be addressed. We plan to start the script as the controller boots up and to reinitialize it whenever we identify that it has stopped.
  3. The data is currently being transferred to AWS as a “Payload” JSON field, making it hard to process and filter out specific fields inside the payload. We need a better understanding of how this data can be processed so it is stored in a useful manner on DynamoDB.
  4. Many of our current sensors are malfunctioning, which requires re-evaluating which data we can safely rely on in the upcoming weeks, until we find replacements.
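To make the bundling idea from item 1 concrete, here is a small NodeJS sketch of it. The interval and bundle size match the figures above, while sendToAwsIot is just a stand-in for whatever actually publishes the message (e.g. a Thing Shadow update), not a function in our code:

    // Sketch of the batching option from item 1: buffer readings on the Pi and
    // flush them as one bundled message every 30 seconds instead of one message
    // every 0.5 seconds. sendToAwsIot is a placeholder, not real project code.
    const buffer = [];

    function onReading(reading) {
      buffer.push(reading);                 // called for every sample arriving over Bluetooth
    }

    function sendToAwsIot(message) {
      // Placeholder: in the real system this would be a Thing Shadow update.
      console.log('Flushing a bundle of ' + message.readings.length + ' readings');
    }

    setInterval(() => {
      if (buffer.length === 0) return;
      const bundle = buffer.splice(0, buffer.length);   // take everything, reset the buffer
      sendToAwsIot({
        sensorId: bundle[0].sensorId,
        readings: bundle,                   // ~60 readings at 2 samples per second
        bundledAt: Date.now(),
      });
    }, 30 * 1000);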

For next week, we hope to work on solutions for most of these issues, as well as pair up more sensors and possibly an additional Arduino controller to further analyze the behavior patterns in a room.

Preparing for the Future

Unfortunately we didn’t make a ton of progress lately, between waiting for the sensors we ordered and being away from the lab for Passover. In the meantime, we wanted to start preparing for the eventual arrival of the kit with the few sensors we have here in stock.

We created a small version of our infrastructure, with an Arduino board connected to just one sensor – a microphone. The board connects to a Raspberry Pi over Bluetooth, and that in turn connects to AWS IoT through the MQTT protocol and reports the information to a Thing Shadow.

We set up a rule in AWS IoT Core to store the data in DynamoDB as soon as it arrives, with sensorId as the hash key, timestamp as the sort key, and an extra sensorRead entry for the actual reading of the mic.
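We configured the rule through the AWS console, but roughly the same thing could be defined with the AWS SDK as sketched below; the topic, table name and role ARN are placeholders, and only the sensorId/timestamp/sensorRead mapping reflects our actual schema:

    // Rough equivalent of the rule we configured in the console: take the mic
    // reading from each shadow update and write it to DynamoDB, keyed by
    // sensorId (hash) and timestamp (range). Names and ARNs are placeholders.
    const AWS = require('aws-sdk');
    const iot = new AWS.Iot({ region: 'us-east-1' });

    iot.createTopicRule({
      ruleName: 'TablogicToDynamo',
      topicRulePayload: {
        sql: "SELECT state.reported.sensorRead AS sensorRead FROM '$aws/things/tablogic-mic/shadow/update'",
        ruleDisabled: false,
        actions: [{
          dynamoDB: {
            tableName: 'tablogic-readings',                                   // placeholder table
            roleArn: 'arn:aws:iam::123456789012:role/tablogic-iot-to-dynamo', // placeholder role
            hashKeyField: 'sensorId',
            hashKeyValue: '${state.reported.sensorId}',                       // taken from the shadow update
            rangeKeyField: 'timestamp',
            rangeKeyValue: '${timestamp()}',
            rangeKeyType: 'NUMBER',
            payloadField: 'sensorRead',                                       // column holding the selected reading
          },
        }],
      },
    }, (err) => {
      if (err) console.error('Failed to create the rule', err);
    });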

The data is sent from the Arduino board as a string in the form of a JSON object. On the Node server hosted on Heroku, we parse the string into an actual JSON object and report it to the Thing Shadow, which triggers the rule.

We are still considering how best to structure the data that is reported to the server, as well as how to resolve several communication issues, while we wait for the new sensors to arrive.