Our EV3DEV / C++ Daisy Chain unlocks additional functionality

In our current layout, we have two Lego constructions that require more than 4 motors, and we use the daisy chain function for that. In case you don’t know: the daisy chain functionality lets you connect up to 4 EV3 bricks, one master and 1, 2 or 3 slave bricks. You program the master brick as usual, but you can also access the motors and sensors of the slave bricks as if they were connected to the master brick. So you can access a maximum of 16 motors and the same number of sensors. That is the theory; in practice this is unfortunately not the case. The Lego software is very buggy. Most of the time, the sensors of the slave brick(s) are not seen by the master brick; motors are ok-ish. If you only use motors connected to the slaves, it works most of the time. Of course we checked with our Mindstorms friends whether there was a solution for this. Their answer: Lego knows about the problem, but they are not fixing it. The solution: don’t use daisy chain …

Anyway, while moving from the standard Mindstorms EV3 programming environment to the EV3DEV / C++ environment, we noticed that EV3DEV doesn’t support the daisy chain option. We searched the internet to see whether somebody else had already implemented the daisy chain functionality in EV3DEV / C++, but that was not the case. So we had the choice to either split the software and change the PC application (so we wouldn’t need daisy chain), or to implement the daisy chain function in our EV3DEV software. We chose the latter option. Of course, it is an additional challenge.

The principle that we want to follow is basically the same as in the EV3 programming environment: you have one master brick running the application software and one or more slave bricks. The software running on the master brick should be able to access the sensors and motors on the slave brick(s) as if they were connected to the master.

In order to achieve this, we needed to extend our motor and sensor library with additional methods. For example, you can create a motor like this for the master brick:

// Create a large motor at port C at the local brick (master)
EV3MotorLarge MasterMotorC = EV3MotorLarge(OUTPUT_C);

And we added the option to create a motor for the second, slave brick:

// Create a virtual brick, accessible at the specified IP address
// And create a large motor at port A at the virtual brick (slave)
RemoteBrick remoteBrick("192.168.137.4");
EV3MotorLarge SlaveMotorA = EV3MotorLarge(OUTPUT_A, std::make_shared<RemoteBrick>(remoteBrick));

Once the motors have been created, you can use a master or slave motor in the same way, e.g.

MasterMotorC.OnDegrees(100, 360, Backward);
SlaveMotorA.OnDegrees(100, 360, Forward);

So far, so good, and nothing special yet. But as you can see, the creation of the virtual brick is based on an IP address. That implies that you can also have a slave brick that is not physically connected by a USB cable. If it is connected via Bluetooth or WiFi, it also works! And the number of bricks is NOT restricted to a total of 4 (1 master + 3 slaves). In theory, you could have an unlimited number of slave bricks. Of course, there is a limit in practice, and that will have to do with performance. I don’t have enough free bricks available to test the performance with 4+ bricks. Something for my backlog ;-).

Apart from the extended number of slave bricks, we have also added the option to access (from the master program) the LED lights on the slave bricks, the sound and the LCD display. In fact, everything we can do on the master brick, we can do on the slave brick(s).

How did we manage to do this? When the RemoteBrick class is instantiated, a TCP connection is set up between the master brick and the slave brick. On each slave brick, a generic ‘server’ program is running that accepts commands from the master brick. All commands that need to be executed on the slave brick are sent via a simple protocol, by serializing the command into a string (e.g. “CreateLargeMotor,Output_A” or “MotorOnDegrees,100,360,OutputA,Forward”). On the slave side, the string is deserialized and then executed.
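The exact wire protocol is not part of this post, but the idea of serializing a command into a comma-separated string on the master and splitting it back into tokens on the slave can be sketched like this (the function names are my own illustration, not the actual library API):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Master side: build a command string like
// "MotorOnDegrees,100,360,OutputA,Forward"
std::string serializeMotorOnDegrees(int power, int degrees,
                                    const std::string& port,
                                    const std::string& direction)
{
    std::ostringstream oss;
    oss << "MotorOnDegrees," << power << ',' << degrees << ','
        << port << ',' << direction;
    return oss.str();
}

// Slave side: split the received string into its comma-separated
// tokens, so the server can dispatch on tokens[0]
std::vector<std::string> deserialize(const std::string& command)
{
    std::vector<std::string> tokens;
    std::stringstream ss(command);
    std::string token;
    while (std::getline(ss, token, ','))
        tokens.push_back(token);
    return tokens;
}
```

The nice thing about a plain-text protocol like this is that it is trivial to debug: you can read every command in a network trace or log file.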

In the current implementation, the server program needs to be started manually on the slave brick(s). This will also be automated: when the RemoteBrick class is instantiated, it will start the server program automatically. Just work in progress ;-).
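One way to do that automation would be to launch the server over SSH from the constructor. A minimal sketch of building such a launch command (the user name, path and binary name here are pure assumptions, not the actual project layout):

```cpp
#include <string>

// Build the shell command that would start the generic server
// program on a slave brick over SSH. "robot" is the default user
// on an EV3DEV brick; the server path is a made-up example.
std::string buildServerStartCommand(const std::string& ip)
{
    return "ssh robot@" + ip + " '/home/robot/ev3server &'";
}
```

The RemoteBrick constructor could pass the result to `std::system()` before opening its TCP connection.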

Lego Mindstorms EV3 with an image with 4 different shades of gray

As mentioned in the previous post, any EV3 is capable of displaying 4 different shades of gray. However, this functionality is not available in the standard Lego programming environment.

When using EV3DEV in combination with C++, there are no libraries available (or I can’t find them 😉 ) to easily upload an image to the LCD screen. You need to write the right values to a screen buffer in order to display an image. But if you do it right, you can get an image like this on the EV3:

A standard Lego Mindstorms EV3 can display four different shades of gray (click on the picture for a short video)

I wrote a simple C# program that scans the complete image, pixel by pixel, from left to right and from top to bottom. The image pixel values are converted into an array with the EV3 pixel values.

  ..

  const int MaxDisplayX = 178;
  const int MaxDisplayY = 128;

  Bitmap myBitmap = new Bitmap("Example picture.png");
  Color pixelColor;

  // Start of the array initialization
  System.Console.WriteLine("unsigned short int imageArray[] = {" + Environment.NewLine);

  // Get the color of a pixel within myBitmap.
  for (int y = 0; y < MaxDisplayY; y++)
  {
    for (int x = 0; x < MaxDisplayX; x++)
    {
      pixelColor = myBitmap.GetPixel(x, y);

      // RGB values are always the same, so it doesn't matter if I read R, G or B
      switch (pixelColor.R)
      {
        case 0: // Black
          System.Console.Write("0x0000");
          break;
        case 85: // Dark Gray
          System.Console.Write("0x4949");
          break;
        case 170: // Light Gray
          System.Console.Write("0x9292");
          break;
        case 255: // White
          System.Console.Write("0xFFFF");
          break; 
        default:
          // Unexpected pixel value: nothing is written for this pixel,
          // hence the requirement that the input uses exactly 4 shades
          break;
      }

      // No , (comma) after the last array element
      if (!((x == (MaxDisplayX - 1)) && (y == (MaxDisplayY - 1))))
      {
        System.Console.Write(", ");
      }
    }
    System.Console.WriteLine(Environment.NewLine);
  }

  // End of the array initialization
  System.Console.WriteLine("};" + Environment.NewLine);
}

Note: the C# program can only convert 4 different values to the pixel array, so the input should be an image that has already been reduced to 4 shades of gray. The image should also have the exact size of the screen, i.e. 178 x 128 pixels. I have only tested it on one grayscale image, so I don’t know if the four grayscale values (0, 85, 170, 255) are always the same four.
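If your source image has not been reduced to exactly those four values yet, one simple approach is to snap every gray value to the nearest of the four levels before feeding it to the converter. A minimal sketch, assuming the 0/85/170/255 levels mentioned above:

```cpp
#include <cstdlib>

// Snap an 8-bit gray value (0..255) to the nearest of the four
// levels the converter expects: 0, 85, 170, 255.
int snapToFourLevels(int gray)
{
    const int levels[4] = {0, 85, 170, 255};
    int best = levels[0];
    for (int level : levels)
        if (std::abs(gray - level) < std::abs(gray - best))
            best = level;
    return best;
}
```

With this pre-processing step, any grayscale image of the right size (178 x 128) becomes valid input for the converter.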

The output of the C# program looks like this …

unsigned short int imageArray[] = {

  0x4949, 0x4949, 0x4949, 0x4949, 0x4949, ... etc
  ...
  ...

};

… and is written to the file “imagearray.h”. That file is used in the C++ program that runs on the EV3:

..
..
#include <fcntl.h>     /* open */
#include <unistd.h>    /* close */
#include <sys/ioctl.h>
#include <sys/mman.h>  /* mmap, munmap */
#include <linux/fb.h>  /* fb_var_screeninfo, fb_fix_screeninfo */
#include <stdlib.h>    /* exit */
#include "imagearray.h"
..
..
{
  int fbfd = 0;
  char* fbp = 0;

  long int screensize = 0;
  struct fb_var_screeninfo vinfo;
  struct fb_fix_screeninfo finfo;

  fbfd = open("/dev/fb0", O_RDWR);
  if (fbfd == -1)
  {
    exit(-1);  
  }

  if (ioctl(fbfd, FBIOGET_FSCREENINFO, &finfo))
  {
    exit(-1);
  }

  /* Get variable screen information */
  if (ioctl(fbfd, FBIOGET_VSCREENINFO, &vinfo))
  {
    exit(-1);
  }

  /* Figure out the size of the screen in bytes */
  screensize = vinfo.xres * vinfo.yres * vinfo.bits_per_pixel / 8;

  fbp = (char*)mmap(0, screensize, PROT_READ | PROT_WRITE, MAP_SHARED, fbfd, 0);
  if (fbp == MAP_FAILED)
  {
    exit(-1);
  }

  // Iterate over the number of array elements
  // Note: sizeof gives the total number of bytes, therefore we
  //       have to divide by the size of one array element
  for (size_t i = 0; i < sizeof(imageArray) / sizeof(imageArray[0]); i++)
  {
    // one pixel = 4 bytes in the screen buffer 
    *((unsigned short int*)(fbp + i * 4)) = imageArray[i];
  }
  munmap(fbp, screensize);
  close(fbfd);

  ..
  ..

That’s all. Simple as that.


Impression of our visit to Lego World 2017 in Utrecht

From Tuesday October 18 until Saturday October 21, our layout was present at Lego World 2017 in Utrecht, the Netherlands.

Photos of our layout can be viewed on Flickr, please click on the photo below to be teletransported to Flickr.


And on Youtube you will find two video impressions:


If you like our layout, please like our videos.

Enjoy, Sioux.NET on Track

How to? EV3’s in Daisy Chain mode plus WiFi

If you have two or more Lego Mindstorms EV3’s in daisy chain mode, it is not possible to use a WiFi connection with the EV3 as well. For our project, we need this functionality. Two embedded software engineers in our team are now updating the firmware to make this work. But will we be ready in time…? From a project management perspective, it is always wise to have a fallback scenario. But is there one…?

Continue reading “How to? EV3’s in Daisy Chain mode plus WiFi”

Loading the train with the 6-axis DOF robot arm

I wrote a small test program for the robot arm to load the train with two containers. Loading the two wagons was done in one minute, much faster than the candy crane.

Have a look at the video and please share with me what you think of it.

Color selector for Lego World 2016

Every year, we use a color selector device (we call it the PUI, the Physical User Interface) as the starting point of our track. Over the past years, the PUI looked like this:

PUI 2012 - 2015

Continue reading “Color selector for Lego World 2016”