C#.net API for Fologram

Hi @Gwyll, I was wondering where we could find the API for Fologram? Could you please tell me?

Hey @Jooooo

While Fologram is built on top of our own internal API, we don’t have public documentation or examples for this. What were you hoping to achieve with the API that you couldn’t get to work with the existing library of Grasshopper components?

Hey @Gwyll,

for Fologram a point cloud needs to be converted into a mesh/Brep.
Since the reconstruction with GH components takes a lot of time, I am currently looking into the KinectFusionExplorer code. I'm still not sure, though, how to convert the KinectFusion output, which is of type Microsoft.Kinect.Fusion.Mesh, to Rhino.Geometry.Mesh while keeping “real-time” performance…
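Here is roughly what I have in mind for the conversion (just a sketch; I'm assuming the Fusion mesh exposes GetVertices() and GetTriangleIndexes() the way the KinectFusionExplorer sample uses them):

```csharp
// Sketch only: copy a Kinect Fusion mesh into a RhinoCommon mesh.
// The Fusion method names follow the KinectFusionExplorer sample
// and may not match my SDK version exactly.
Rhino.Geometry.Mesh ToRhinoMesh(Microsoft.Kinect.Fusion.Mesh fusionMesh)
{
    var rhinoMesh = new Rhino.Geometry.Mesh();

    var vertices = fusionMesh.GetVertices();        // Vector3 collection
    var indices = fusionMesh.GetTriangleIndexes();  // flat index list

    foreach (var v in vertices)
        rhinoMesh.Vertices.Add(v.X, v.Y, v.Z);

    // The index list holds one triangle per group of three entries.
    for (int i = 0; i < indices.Count; i += 3)
        rhinoMesh.Faces.AddFace(indices[i], indices[i + 1], indices[i + 2]);

    rhinoMesh.Normals.ComputeNormals();
    return rhinoMesh;
}
```

My worry is whether copying every vertex per frame like this is fast enough to stay “real-time”.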

Then I saw “Public C#.net API” in the 2020.1 change log. So my second option would be to write a small component with that API so that Fologram accepts a raw point cloud. I'm not sure which option requires less work. Any chance I could look into the API, or the script from https://vimeo.com/277916431?

Hey @Jooooo

The API wouldn’t let you create a new data type, only make calls to existing sync functions (sending meshes, sending text, sending materials, getting device transforms, getting hand transforms, getting controller transforms etc.) from within C#.

Creating the mesh from the point cloud data (as seen in that video) can be completed much faster than the time it takes for Grasshopper to compute and redraw a new solution. This overhead gives you a maximum frame rate for any dynamic mixed reality app you are creating from Grasshopper (about 12-15 fps), so you should be able to create meshes dynamically from the Kinect data and stream them over if you’re happy with this frame rate. The video you linked to was created without using our API, though we did write a C# component to create the output mesh more efficiently.

PointCloudStream_internalized.gh (5.0 KB)

You should be able to easily customize/extend this to stream only tris and no colours if you want to reduce the mesh conversion time, or to customize the normals of the quads etc.
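As a rough idea of the approach (a simplified sketch, not the exact code in the attached definition - it assumes your Kinect points arrive in depth-image grid order with a known width and height):

```csharp
// Simplified sketch: build a quad mesh from a grid-ordered point cloud,
// skipping vertex colours to keep the payload small. width/height are
// the (assumed) depth image dimensions.
Rhino.Geometry.Mesh CloudToMesh(Rhino.Geometry.PointCloud cloud, int width, int height)
{
    var mesh = new Rhino.Geometry.Mesh();

    foreach (var item in cloud)
        mesh.Vertices.Add(item.Location);  // append colours here if needed

    // One quad over each 2x2 block of neighbouring depth samples.
    for (int y = 0; y < height - 1; y++)
        for (int x = 0; x < width - 1; x++)
        {
            int i = y * width + x;
            mesh.Faces.AddFace(i, i + 1, i + width + 1, i + width);
        }

    mesh.Normals.ComputeNormals();
    return mesh;
}
```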

You will need Tarsier for the Point Cloud parameter component, though it will also be useful for streaming your Kinect data: https://www.food4rhino.com/app/tarsier


Hi @Gwyll,

Wow, I really appreciate your support. Yes, this can be a really good starting point for my script 🙂 I still have one question. By playing around with the resolution of the Kinect camera, I can adjust the number of points that are pushed to the HoloLens. If around 15,000 points are pushed, the network status in the Fologram app shows the following:
[screenshot: Fologram network status]

The visualization in the HoloLens is delayed by 4~5 seconds, although the visualization in Rhino doesn’t have any delay. My question is whether the current update rate in the network status shows the data rate required for the captured mesh, or whether it really shows the current data rate?
The latter would confuse me, because when the current update rate showed 15 Mb/s, I didn’t have any delay in the HoloLens, while my network only has an upload speed of 2 Mb/s according to a speed test.

Hi @Jooooo,

Woah! That’s a lot of data going over the network. Let’s clarify a few things about the numbers first:

  1. The speeds listed here are in megabits per second - that’s the megabytes figure x8 (this is normal for networking, and likely the same unit as in your speed test)
  2. Assuming the speed test you ran is conventional (e.g. speedtest.net or similar) - these are designed to test your internet speed, not your local network speed. Fologram transfers data directly from your PC to your device (via your router) rather than bouncing to a central server and back (for exactly that reason - it would be too slow!)

As for the delay, I’m not surprised that there is some with that throughput. What’s going on in the background is:

  1. Grasshopper creates the geometry
  2. In the background (without slowing Grasshopper, at least for a while) we:
    a. Convert the geometry to a format suitable for network transfer
    b. Compress it
    c. Send it over the network
    d. Unpack it
    e. Convert it to a format that can be displayed on the device.
    f. Render it when the device is ready.
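
In generic terms, (a)-(c) on the PC side amount to something like this (purely illustrative - not our actual implementation):

```csharp
using System.IO;
using System.IO.Compression;
using System.Net.Sockets;
using System.Threading.Tasks;

// Illustrative sketch only: serialize -> compress -> send on a background
// task so the Grasshopper solution isn't blocked while data goes out.
static void QueueSend(byte[] serializedMesh, NetworkStream stream)
{
    Task.Run(() =>
    {
        using (var buffer = new MemoryStream())
        {
            // (b) compress the serialized geometry
            using (var gzip = new GZipStream(buffer, CompressionLevel.Fastest, leaveOpen: true))
                gzip.Write(serializedMesh, 0, serializedMesh.Length);

            // (c) send it over the network
            var payload = buffer.ToArray();
            stream.Write(payload, 0, payload.Length);
        }
    });
}
```

The device then runs the same steps in reverse ((d)-(f)), which is why its processing speed matters as much as your PC’s.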

Because (2) takes place in the background, it has minimal effect on the performance of Grasshopper, which is why you’re finding that those updates are happening quickly.

If you’re creating geometry faster than the process in (2) can take place, you’ll get a backlog which can cause the delay.

(f) is also an important one here - because the device will naturally be slower than your PC, it can take longer to unpack the geometry than it was to pack to begin with. This means that even if all other parts are running efficiently the device can end up with a processing backlog, causing a delay.

You’ll likely find that if you let the device catch up by stopping sending for a while, the first frame you send again will render quickly, and the delay will build up over time. For example:

An efficient scenario:
Grasshopper is refreshing every 100ms. It takes 80ms to pack, send and unpack on the device. This means that we can send as much as we want, because our processing and transfer time is less than the refresh rate.

An inefficient scenario:
Grasshopper is refreshing every 100ms. It takes 110ms to pack, send and unpack on the device. At first it seems fine, because the frames are only 10ms, 20ms, 30ms… behind. But after 10 seconds, we are 10ms x 10 frames per second x 10 seconds = 1 second behind.

So essentially if you’re slightly over what your network and device can handle, it adds up!
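
Putting numbers on that second scenario:

```csharp
// Backlog arithmetic for the inefficient scenario above.
double refreshMs = 100;   // Grasshopper refresh interval
double processMs = 110;   // pack + send + unpack time per frame
double lagPerFrameMs = processMs - refreshMs;                 // 10ms per frame
double framesPerSecond = 1000.0 / refreshMs;                  // 10 frames/sec
double lagAfter10Sec = lagPerFrameMs * framesPerSecond * 10;  // 1000ms = 1 second
```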

Your options are:

  1. Reduce the size of the data you’re transferring (so it can be packed, transferred and unpacked faster)
    or
  2. Send the data less frequently, so there’s more time to pack, transfer and unpack the frames.

Cheers

Hey @Cam,

OK, these are really valuable tips. I really appreciate this clarification. To sum up:

My scenario:
Grasshopper is refreshing every 50ms and the current update rate from Fologram is 22 Mb/s. I can see that the scene in the HoloLens is delayed. According to your description, there are two possible reasons for this delay.

  1. (2c) Sending the mesh over the network to the HoloLens takes too much time (as you can see below, my GH script needs about 7~10ms)
    -> However, I am using a Wi-Fi 5 router, so I assume that the upload speed of my router should be faster than 22 Mb/s.

  2. Or my device, a HoloLens 1, is too slow.
    -> I'm not sure about this… since there are projects where similar work has already been conducted (https://www.youtube.com/watch?v=Wc5z9OWFTTU , https://www.youtube.com/watch?v=7d59O6cfaM0).

Hey @Jooooo
A few clarifications:

  • The 5ms compute time in Grasshopper that you see from your definition isn’t the same as the refresh rate for your definition - it usually takes another 40-50ms+ for the Rhino viewport to refresh. You can turn on a timer to get a real number for frames per second by going to Fologram > Show FPS in Grasshopper.
  • Because the work Fologram does in the background is off the main thread (so as not to slow down your definition while waiting for data to be sent and acknowledged), it won’t show up in the Grasshopper profiler.
  • Your router is unlikely to be the limiting factor - it is more likely the combined process of the steps in (2), particularly unpacking and building the geometry.

And for the examples:

  1. Microsoft: This one is a very optimized proof of concept with a lot of tricks in the background! They are live remeshing and sending meshes and textures rather than point clouds. I suspect they also save a lot of bandwidth with low-resolution textures.
  2. Guitar player: This looks like about the performance you would expect from a simple, semi-optimized point cloud in a dedicated app - around 10fps.

And a couple of other things worth mentioning that distinguish point cloud streaming from video calls:

  1. Video calls are typically lossy - they can drop frames without issue. In theory point cloud streaming can also do this, but without knowing the application it’s hard to say whether that’s appropriate - Fologram never drops frames and only serves data reliably.
  2. Video calls are compressed, and typically adapt quality on the fly based on network performance - it’s easy to downscale and encode video, as this is done at a hardware level and is commonplace (e.g. H.264). This is much harder to do with point clouds - how would you know which information you can and can’t keep? There are some recent algorithms for this (e.g. Draco), but they also introduce encoding and decoding costs.

In the guitar example, each point would be represented by XYZ + ARGB = 16 bytes, which are quickly loaded into a point cloud. Using meshes to do this (assuming one triangle per face), we have XYZ x3, ABC, ARGB x3 = 60 bytes per point (add 36 if normals are included). These then need to be interpreted as a mesh (creating faces, adding normals) rather than a point cloud, which adds overhead.
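
Spelling that arithmetic out (floats and ints at 4 bytes each; the 60-byte figure assumes each point becomes its own independent triangle):

```csharp
// Per-point payload sizes from the figures above (4-byte floats and ints).
int pointBytes  = 3 * 4 + 4;               // XYZ floats + packed ARGB = 16
int triVertices = 3 * 3 * 4;               // three XYZ vertices       = 36
int triIndices  = 3 * 4;                   // A, B, C face indices     = 12
int triColours  = 3 * 4;                   // three packed ARGB values = 12
int meshBytes   = triVertices + triIndices + triColours;      //       = 60
int withNormals = meshBytes + 3 * 3 * 4;   // plus three normals       = 96
```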

The TL;DR - it’s reasonable to expect better performance from a purpose-built point cloud streaming application, particularly where point clouds are used natively rather than meshes.