Our EV3DEV / C++ Daisy Chain unlocks additional functionality

In our current layout, we have two Lego constructions that require more than 4 motors, and we use the daisy chain function for that. In case you don’t know it: the daisy chain functionality lets you connect up to 4 EV3 bricks, one master and 1, 2 or 3 slave bricks. You program the master brick as usual, but you can also access the motors and sensors of the slave bricks as if they were connected to the master brick. So you can access a maximum of 16 motors and the same number of sensors. That is the theory; in practice this is unfortunately not the case. The Lego software is very buggy. Most of the time, the sensors of the slave brick(s) are not seen by the master brick; motors are ok-ish. If you only use motors connected to the slave, it works most of the time. Of course we checked with our Mindstorms friends whether there was a solution for this. Their answer: Lego knows about the problem and is not going to fix it. The suggested solution: don’t use daisy chain ….

Anyway, while moving from the standard Mindstorms EV3 programming environment to the EV3DEV / C++ environment, we noticed that EV3DEV doesn’t support the daisy chain option. We searched the internet to see if somebody else had already implemented the daisy chain functionality in EV3DEV / C++, but that was not the case. So we had the choice: either split the software and change the PC application (so we wouldn’t need daisy chain), or implement the daisy chain function in our EV3DEV software. We chose the latter option. Of course, it is an additional challenge.

The principle that we want to follow is basically the same as in the EV3 programming environment: you have one master brick running the application software and one or more slave bricks. The software running on the master brick should be able to access the sensors and motors on the slave brick(s) as if they were connected to the master.

In order to achieve this, we needed to extend our motor and sensor library with additional methods. For example, you can create a motor like this for the master brick:

// Create a large motor at port C at the local brick (master)
EV3MotorLarge MasterMotorC = EV3MotorLarge(OUTPUT_C);

And we added the option to create a motor for the second, slave brick:

// Create a virtual brick, accessible at the specified IP address
// And create a large motor at port A at the virtual brick (slave)
RemoteBrick remoteBrick("192.168.137.4");
EV3MotorLarge SlaveMotorA = EV3MotorLarge(OUTPUT_A, std::make_shared<RemoteBrick>(remoteBrick));

Once the motors have been created, you can use a master or slave motor in the same way, e.g.

MasterMotorC.OnDegrees(100, 360, Backward);
SlaveMotorA.OnDegrees(100, 360, Forward);

So far, so good, and nothing special yet. But as you can see, the creation of the virtual brick is based on an IP address. That implies that you can also have a slave brick that is not physically connected by a USB cable: if it is connected via Bluetooth or WiFi, it also works! And the number of slave bricks is NOT restricted to a total of 4 (1 master + 3 slaves). In theory, you could have an unlimited number of slave bricks. Of course, there is a limit in practice, and that will have to do with performance. I don’t have enough free bricks available to test the performance with 4+ bricks. Something for my backlog ;-).

Apart from the extended number of slave bricks, we have also added the option to access the LED lights, the sound and the LCD display on the slave bricks from the master program. In fact, everything we can do on the master brick, we can do on the slave brick(s).

How did we manage to do this? When the remoteBrick class is created, a TCP connection is set up between the master brick and the slave brick. On each slave brick, a generic ‘server’ program is running that accepts commands from the master brick. All commands that need to be executed on the slave brick are sent via a simple protocol by serializing the command into a string (e.g. “CreateLargeMotor,Output_A” or “MotorOnDegrees,100,360,OutputA,Forward”). On the slave side, the string is deserialized and then executed.
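
To give an idea of how simple this protocol is, here is a minimal sketch of the master side: it opens a TCP connection to the slave brick and sends newline-terminated, comma-separated command strings. Note that the class name SlaveConnection and the port number 5000 are made up for illustration; our actual library looks a bit different.

// Minimal sketch of the master side of the daisy chain protocol.
// The class name SlaveConnection and the port number 5000 are
// illustrative assumptions, not the names used in our library.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <stdexcept>
#include <string>

class SlaveConnection
{
public:
  SlaveConnection(const std::string& ipAddress, int port = 5000)
  {
    sock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in address{};
    address.sin_family = AF_INET;
    address.sin_port = htons(port);
    inet_pton(AF_INET, ipAddress.c_str(), &address.sin_addr);
    if (connect(sock, reinterpret_cast<sockaddr*>(&address), sizeof(address)) < 0)
    {
      throw std::runtime_error("Cannot connect to slave brick at " + ipAddress);
    }
  }

  ~SlaveConnection() { close(sock); }

  // Send one serialized command, e.g. "MotorOnDegrees,100,360,OutputA,Forward".
  // A newline marks the end of the command for the server on the slave brick.
  void SendCommand(const std::string& command)
  {
    std::string line = command + "\n";
    send(sock, line.c_str(), line.size(), 0);
  }

private:
  int sock;
};

// Usage (illustrative):
//   SlaveConnection slave("192.168.137.4");
//   slave.SendCommand("CreateLargeMotor,Output_A");
//   slave.SendCommand("MotorOnDegrees,100,360,OutputA,Forward");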

In the current implementation, the server program needs to be started on the slave brick(s) manually. This will also be automated: when the remoteBrick class is created, it will start the server program automatically. Just work in progress ;-).

(Almost) Final version of the new container loader

In the video, you see the (almost) final version of the container loader. The only thing that is missing is the power cable carrier.

What is new in this version (apart from finishing the build)? First of all, it has been added to the ‘real’ conveyor belt (you can see the enormous length ;-). This conveyor belt moves the containers from the warehouse to the wagons.

Furthermore, an ultrasonic sensor has been added at the top of the superstructure. This ultrasonic sensor detects whether a container has passed under it, so it ‘knows’ that after 1 second it can set its state to ‘container delivered’. Without this sensor, the only way to ‘ensure’ that the container had been loaded was to use a predefined time frame. A predefined time frame has two major drawbacks: you need a very long time frame to make sure that the container has arrived, and even then you cannot guarantee that the container will be delivered, no matter how long you make the time frame. You can see the ultrasonic sensor detection in close-up around time frame 0:45.
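
The detection logic itself is simple. Below is a rough sketch of the idea, written against the ev3dev-lang-cpp bindings; the input port and the distance threshold are assumptions for illustration, not the actual values used in the build.

// Sketch of the 'container delivered' detection. The port (INPUT_1) and the
// 25 cm threshold are assumptions, not the values of the actual build.
#include "ev3dev.h"
#include <chrono>
#include <thread>

bool WaitForContainerDelivered()
{
  ev3dev::ultrasonic_sensor eyes(ev3dev::INPUT_1);

  // Wait until a container passes under the sensor (distance drops below threshold)
  while (eyes.distance_centimeters() > 25.0f)
  {
    std::this_thread::sleep_for(std::chrono::milliseconds(20));
  }

  // One second after the container has passed, it has reached the wagon
  std::this_thread::sleep_for(std::chrono::seconds(1));
  return true;  // the state can now be set to 'container delivered'
}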

Lego Mindstorms EV3 displaying an image with 4 different shades of gray

As mentioned in the previous post, any EV3 is capable of displaying 4 different shades of gray. However, this functionality is not available in the standard Lego programming environment.

When using EV3DEV in combination with C++, there are no libraries available (or I can’t find them 😉 ) to easily upload an image to the LCD screen. You need to write the right values to a screen buffer in order to display an image. But if you do it right, you can get an image like this on the EV3:

A standard Lego Mindstorms EV3 can display four different shades of gray (click on the picture for a short video)

I wrote a simple C# program that scans the complete image, pixel by pixel, from left to right and from top to bottom. The image pixel values are converted into an array with the EV3 pixel values.

  ..

  const int MaxDisplayX = 178;
  const int MaxDisplayY = 128;

  Bitmap myBitmap = new Bitmap("Example picture.png");
  Color pixelColor;

  // Start of the array initialization
  System.Console.WriteLine("unsigned short int imageArray[] = {" + Environment.NewLine);

  // Get the color of a pixel within myBitmap.
  for (int y = 0; y < MaxDisplayY; y++)
  {
    for (int x = 0; x < MaxDisplayX; x++)
    {
      pixelColor = myBitmap.GetPixel(x, y);

      // RGB values are always the same, so it doesn't matter whether I read R, G or B
      switch (pixelColor.R)
      {
        case 0: // Black
          System.Console.Write("0x0000");
          break;
        case 85: // Dark Gray
          System.Console.Write("0x4949");
          break;
        case 170: // Light Gray
          System.Console.Write("0x9292");
          break;
        case 255: // White
          System.Console.Write("0xFFFF");
          break; 
        default:
          break;
      }

      // No comma after the last array element
      if (!((x == (MaxDisplayX - 1)) && (y == (MaxDisplayY - 1))))
      {
        System.Console.Write(", ");
      }
    }
    System.Console.WriteLine(Environment.NewLine);
  }

  // End of the array initialization
  System.Console.WriteLine("};" + Environment.NewLine);
}

Note: the C# program can only convert 4 different values to the pixel array, so the input should be an image that has already been converted into a 4-level grayscale image. The image should also have the exact size of the screen, i.e. 178 x 128 pixels. I have only tested it on one grayscale image, so I don’t know whether the four grayscale values (0, 85, 170, 255) are always the same four.

The output of the C# program looks like this …

unsigned short int imageArray[] = {

  0x4949, 0x4949, 0x4949, 0x4949, 0x4949, ... etc
  ...
  ...

};

… and is written to the file “imagearray.h”. That file is used in the C++ program that runs on the EV3:

#include <fcntl.h>      // open
#include <sys/ioctl.h>  // ioctl
#include <sys/mman.h>   // mmap, munmap
#include <unistd.h>     // close
#include <linux/fb.h>   // fb_var_screeninfo, fb_fix_screeninfo, FBIOGET_*
#include <cstdlib>      // exit
#include "imagearray.h"
..
..
{
  int fbfd = 0;
  char* fbp = 0;

  long int screensize = 0;
  struct fb_var_screeninfo vinfo;
  struct fb_fix_screeninfo finfo;

  fbfd = open("/dev/fb0", O_RDWR);
  if (fbfd == -1)
  {
    exit(-1);  
  }

  if (ioctl(fbfd, FBIOGET_FSCREENINFO, &finfo))
  {
    exit(-1);
  }

  /* Get variable screen information */
  if (ioctl(fbfd, FBIOGET_VSCREENINFO, &vinfo))
  {
    exit(-1);
  }

  /* Figure out the size of the screen in bytes */
  screensize = vinfo.xres * vinfo.yres * vinfo.bits_per_pixel / 8;

  fbp = (char*)mmap(0, screensize, PROT_READ | PROT_WRITE, MAP_SHARED, fbfd, 0);
  if (fbp == MAP_FAILED)
  {
    exit(-1);
  }

  // Iterate over the number of array elements
  // Note: sizeof gives the total number of bytes, therefore we
  //       have to divide by the size of one array element
  for (int i = 0; i < sizeof(imageArray) / sizeof(imageArray[0]); i++)
  {
    // one pixel = 4 bytes in the screen buffer 
    *((unsigned short int*)(fbp + i * 4)) = imageArray[i];
  }
  munmap(fbp, screensize);
  close(fbfd);

  ..
  ..

That’s all. Simple as that.

Mindstorms EV3 without modification has a 4-level grayscale display

Until now, we have programmed the EV3 bricks with the standard Lego programming software, using messages to send commands from the PC and to retrieve status information from the EV3s (this was not trivial, see this article: https://siouxnetontrack.wordpress.com/2014/08/27/sending-data-over-wifi-between-our-pc-application-and-the-ev3-part-4/).

Programming the EV3s with the standard programming language has, like anything, pros and cons. A major drawback is that you cannot do complex math. So we decided to convert all EV3 programs to EV3DEV and the C++ language.

I started this weekend with learning the EV3DEV environment and wrote my first, small test program (pressing a touch sensor makes a motor rotate). Reading the manuals taught me that you can do much more with the EV3 than with the standard programming environment.
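
For the curious: with the ev3dev-lang-cpp bindings, such a first test fits in a handful of lines. The following is only a sketch of the idea (ports and speed chosen for illustration), not my exact test program.

// First test: the motor runs while the touch sensor is pressed.
// Assumptions: touch sensor on input port 1, large motor on output port A.
#include "ev3dev.h"
#include <chrono>
#include <thread>

int main()
{
  ev3dev::touch_sensor button(ev3dev::INPUT_1);
  ev3dev::large_motor  motor(ev3dev::OUTPUT_A);

  motor.set_speed_sp(400);  // speed setpoint in tacho counts per second

  while (true)
  {
    if (button.is_pressed())
    {
      motor.run_forever();  // rotate while the sensor is pressed
    }
    else
    {
      motor.stop();
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
  }
}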

For example, you can address the two LEDs on the brick separately. And it can also display four different shades of gray on the display:

And all of this is available out of the box!
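
Addressing the LEDs separately, for example, comes down to just a few calls. A small sketch, again based on the ev3dev-lang-cpp bindings (the brightness values are only an example):

// The EV3 has four LED segments (red/green, left/right) that can be
// set to any brightness independently via the ev3dev bindings.
#include "ev3dev.h"

void ShowLedExample()
{
  // Left LED fully red, right LED fully green
  ev3dev::led::red_left.set_brightness(ev3dev::led::red_left.max_brightness());
  ev3dev::led::green_left.set_brightness(0);

  ev3dev::led::red_right.set_brightness(0);
  ev3dev::led::green_right.set_brightness(ev3dev::led::green_right.max_brightness());
}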

Watch the short video on Youtube to see these features in action (click on the photo above).

Some useful links to get started:

Delta Crane will be replaced by Container Loading Station

For our layout in 2020, we will replace the Delta Crane, the module that was responsible for loading the containers from the conveyor belt to the wagons. You can see the Delta Crane in action in the following video (start at time frame 3:09):

One of the biggest disadvantages of the crane is its speed. Or rather its slowness, that is a better description. Loading the wagons is now one of the bottlenecks in decreasing the total running time (from color selection to candy delivery).

So, I started to think about a new way of loading the train. How do I get the containers from the belt to the wagons …?

Another conveyor belt? No, too straightforward.

A push mechanism (a kind of reverse of the delivery station)? No, been there, done that.

A robot arm? Yeah, that could work, but is that special enough?

(quite some time passing by …)

And then I came up with something completely new, as you can see in the following video:

At the left, you can see (a simplified part of) the conveyor belt; on the right, you see the (simplified) train. The superstructure in the middle is able to move back and forth, so it can reach the four wagons without moving the train. In this first prototype, I can only move the superstructure by hand. But of course, this will be automated as well, using sensors to detect the 4 wagon positions.

You can find renders of the final result at our Flickr page (click on the photo below).

Result of our work in 2019

A new video has been uploaded to our Youtube channel. In 2019, lots of new elements have been added to our layout. To name a few: the warehouse, able to store 60 containers with candies, with two independent stacker cranes; the four candy circles; and an updated delivery station. Also brand new is the PC software that connects everything. We have worked hard to get the software running stably, with success. You can see the result in the video. Enjoy!

Lego Monorail EV3 – Automated Switch

A monorail without a switch track is not a real monorail 😉

I have been working on an automatic switch for the Lego Mindstorms EV3 monorail. The idea is that there will be a monorail track on our layout with two reverse loops. The loops will each have a switch, and the train will need to set the switch in the right position. One reverse loop will be at the delivery station, where the empty containers can be loaded onto the monorail. And the second reverse loop will be at the Candy warehouse, where the empty containers will be dropped. For this year, the (un)loading of the empty containers will be done manually. The monorail simply saves us the walk between the two locations with the empty containers. Yes, I know this sounds lazy, and yes, it is.

In the video below, you see version 1.0 of the automated switch. Currently, the switch is powered by a PF motor. The train was simply programmed to run and switch direction when it noticed the green tile (on the left, not visible in the video), a red or a yellow tile. Meanwhile, I was operating the switch with a PF remote control.

The next update will be a finished reverse loop, including a second EV3 that controls the switch. The train-EV3 will communicate with the Switch-EV3 to set it in the right position.

Sioux.NET on Monotrack … ?

One of the “policies” within Sioux.NET on Track is to change a build every three years. In other words, when a build has been part of a Lego World demo for three years, it should be replaced by a new one. For example, loading the train was first done by the container crane; now it is done by the delta crane. The same applies to the train: the first years, we controlled the train with an NXT; now it is controlled by an EV3. For the new layout, I am thinking of replacing the train with a monorail (and thus renaming the group to “Sioux.NET on Monotrack” ;-).

You can follow the engineering process on Eurobricks.

Enjoy, Hans

Toypro published interview with our project

For our Lego builds, we are always in great need of bricks. Lots of bricks. Bricklink is our main resource for that, and it is a challenge to find the right bricks for a reasonable price, preferably at one shop to avoid too high shipping costs.

One of the best shops in town (literally: their main office is in Nederweert, quite close to Sioux in Eindhoven) is Toypro. They have a newsletter, and our project is featured in their latest edition.

Click here to go to the article in English or here to read it in Dutch. In total, 6 languages are available.

Enjoy reading.