miki123211 2 hours ago | next |

Are there any good resources on how this kind of real-time programming is done?

What goes into ensuring that a program is actually realtime? Are there formal proofs, or just experience and "vibes"? Is realtime coding any different from normal coding? How do modern CPU architectures, with their non-constant-time instructions, branch prediction, and potential for cache misses, play into this?

throwup238 an hour ago | root | parent | next |

> What goes into ensuring that a program is actually realtime?

Realtime mostly means predictable runtime for code. As long as it's predictable, you can scale the CPU/microcontroller to fit your demands or optimize your code to fit the constraints. It's about making sure your code can always respond in time to hardware inputs, timers, and other interrupts.

Generally the Linux kernel's scheduling makes the system very unpredictable. RT Linux tries to address that along with several other subsystems. On embedded CPUs this usually means disabling advanced features like cache and speculative execution (although I don't remember if RT handles that part, since it's very vendor-specific).
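
To make that concrete, the usual first steps on PREEMPT_RT are to put the time-critical thread into an RT scheduling class and lock its memory so page faults can't stall it. A minimal sketch, assuming pthreads on Linux (the priority value 80 is arbitrary, and this needs root or CAP_SYS_NICE):

  #include <pthread.h>
  #include <sched.h>
  #include <stdio.h>
  #include <sys/mman.h>

  /* Time-critical work goes here: wait on a timer or interrupt-driven
   * fd, do a bounded amount of work, repeat. */
  static void *rt_task(void *arg)
  {
      (void)arg;
      /* ... */
      return NULL;
  }

  int main(void)
  {
      pthread_t tid;
      pthread_attr_t attr;
      struct sched_param sp = { .sched_priority = 80 };  /* arbitrary */

      /* Lock current and future pages so page faults never stall the task. */
      if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
          perror("mlockall");

      /* SCHED_FIFO: the RT scheduler, not CFS, decides when this runs. */
      pthread_attr_init(&attr);
      pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
      pthread_attr_setschedparam(&attr, &sp);
      pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

      int err = pthread_create(&tid, &attr, rt_task, NULL);
      if (err != 0)
          fprintf(stderr, "pthread_create failed: %d (needs root or CAP_SYS_NICE)\n", err);

      pthread_join(tid, NULL);
      return 0;
  }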

juliangmp 42 minutes ago | root | parent | prev | next |

I'm not hugely experienced in the field personally, but from what I've seen, actually proving hard real-time capabilities is rather involved. If something is safety critical (think brake systems, avionics computers, etc.) it likely means you also need some special certification or even formal verification. And (correct me if I'm wrong) I don't think you'll want to use a Linux kernel, even with the PREEMPT_RT patches. I'd say specialized RT operating systems, like FreeRTOS or Zephyr, would be more fitting (though I don't have direct experience with them).

As for the hardware, you can't really use a ‘regular’ CPU and expect completely deterministic behavior. The things you mentioned (and, for example, caching) absolutely impact this. IIRC AMD/Xilinx actually offer a processor that has regular ARM application cores alongside some ARM real-time cores for these exact reasons.
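
For a feel of the difference, a hard-periodic task under something like FreeRTOS is written directly against the RTOS scheduler. Rough sketch only; the task name, stack size, priority and the 1 ms period are all made up:

  #include "FreeRTOS.h"
  #include "task.h"

  /* vTaskDelayUntil() keeps the period anchored to the previous wake-up,
   * so the body's execution time doesn't accumulate as drift. */
  static void vControlTask(void *pvParameters)
  {
      (void)pvParameters;
      TickType_t xLastWake = xTaskGetTickCount();
      const TickType_t xPeriod = pdMS_TO_TICKS(1);   /* 1 ms, made up */

      for (;;) {
          /* read sensors, run control law, write actuators ... */
          vTaskDelayUntil(&xLastWake, xPeriod);
      }
  }

  int main(void)
  {
      /* Highest priority so it preempts everything else. */
      xTaskCreate(vControlTask, "ctrl", configMINIMAL_STACK_SIZE,
                  NULL, configMAX_PRIORITIES - 1, NULL);
      vTaskStartScheduler();    /* does not return once the scheduler runs */
      for (;;) { }
  }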

actionfromafar an hour ago | root | parent | prev | next |

For things like VxWorks, it's mostly vibes and setting priority between processes. But there are other ways. You can "offline schedule" your tasks, i.e. run a scheduler at compile time which decides all possible supported orderings and how long a slot each task gets.

Then, there's the whole thing of hardware. Do you have one or more cores? If you have more than one core, can they introduce jitter or slowdown to each other accessing memory? And so on and so forth.
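
Going back to the offline-schedule idea: it's basically a cyclic executive, where the frames and which task runs in each one are fixed at build time, so the worst case can be analysed before the thing ever runs. A toy sketch, with the tasks and the frame tick entirely invented:

  #include <stddef.h>

  static void read_sensors(void)  { /* ... */ }
  static void control_step(void)  { /* ... */ }
  static void log_slow(void)      { /* ... */ }

  /* Placeholder: block until the frame timer fires (platform-specific). */
  static void wait_for_frame_tick(void) { /* ... */ }

  typedef void (*task_fn)(void);

  /* 4-frame major cycle decided at compile time; the slow task only
   * gets the last frame, so every frame's worst case is known. */
  static task_fn const schedule[4][2] = {
      { read_sensors, control_step },
      { read_sensors, control_step },
      { read_sensors, control_step },
      { read_sensors, log_slow     },
  };

  void scheduler_loop(void)
  {
      size_t frame = 0;
      for (;;) {
          wait_for_frame_tick();
          for (size_t i = 0; i < 2; i++)
              schedule[frame][i]();
          frame = (frame + 1) % 4;
      }
  }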

tonyarkles 7 minutes ago | root | parent | next |

> it's mostly vibes and setting priority between processes

I'm laughing so, so hard right now. Thanks for, among other things, confirming for me that there isn't some magic tool that I'm missing :). At least I have the benefit of working on softer real-time systems where missing a deadline might result in lower-quality data, but there are no lives at risk.

Setting and clearing GPIOs on task entry/exit are a nice touch for verification too.
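
For anyone who hasn't seen that trick: raise a pin on entry, drop it on exit, and the pulse width on a scope or logic analyser is the task's actual execution time and jitter. Sketch only; gpio_set()/gpio_clear() and the pin number stand in for whatever the platform HAL really provides:

  #define PIN_CTRL_TASK 5            /* made-up pin number */

  extern void gpio_set(int pin);     /* placeholders for the platform's HAL */
  extern void gpio_clear(int pin);
  extern void read_inputs(void);
  extern void run_control_law(void);
  extern void write_outputs(void);

  void control_task(void)
  {
      gpio_set(PIN_CTRL_TASK);       /* rising edge: task entered */

      read_inputs();
      run_control_law();
      write_outputs();

      gpio_clear(PIN_CTRL_TASK);     /* falling edge: pulse width = execution time */
  }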

rightbyte an hour ago | root | parent | prev |

> If you have more than one core, can they introduce jitter or slowdown to each other accessing memory?

DMA and fancy peripherals like UART, SPI etc, could be namedropped in this regard, too.

rightbyte an hour ago | root | parent | prev |

On all the real-time systems I've worked on, it has just been empirical measurements of CPU load for the different task periods and a good enough margin against overruns.

On an ECU I worked on, the cache was turned off to not have cache misses ... no cache no problem. I argued it should be turned on and the "OK cpu load" limit decreased instead. But nope.

I wouldn't say there is any conceptual difference from normal coding, except that you'd want to be fairly sure algorithms terminate in a reasonable time in a time-constrained task. More online algorithms than usual, though.

My take is that most of the strangeness in real-time coding is actually about doing control theory stuff. The program often feels like a state machine going in a circle.
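
That "state machine going in a circle" shape is pretty literal in practice. A stripped-down sketch, with states, events and helpers invented for illustration: one pass per tick, no blocking, no unbounded loops inside a state:

  typedef enum { ST_IDLE, ST_SAMPLING, ST_ACTUATING, ST_FAULT } state_t;

  extern int  start_requested(void);   /* invented helpers */
  extern int  sensors_ok(void);
  extern void sample_inputs(void);
  extern void drive_outputs(void);
  extern void outputs_safe(void);

  /* Called once per task period: take at most one transition and return. */
  void tick(state_t *st)
  {
      switch (*st) {
      case ST_IDLE:
          if (start_requested()) *st = ST_SAMPLING;
          break;
      case ST_SAMPLING:
          sample_inputs();
          *st = sensors_ok() ? ST_ACTUATING : ST_FAULT;
          break;
      case ST_ACTUATING:
          drive_outputs();
          *st = ST_IDLE;
          break;
      case ST_FAULT:
          outputs_safe();               /* stays here until a reset */
          break;
      }
  }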

tonyarkles 2 minutes ago | root | parent |

> On an ECU I worked on, the cache was turned off to not have cache misses ... no cache no problem. I argued it should be turned on and the "OK cpu load" limit decreased instead. But nope.

Yeah, the tradeoff there is interesting. Sometimes "get it as deterministic as possible" is the right answer, even if it's slower.

> My take is that most of the strangeness in real-time coding is actually about doing control theory stuff. The program often feels like a state machine going in a circle.

Lol, with my colleagues/juniors I'll often encourage them to take code that doesn't look like that and figure out if there's a sane way to turn it into "a state machine going in a circle". For problems that fit that mold, being able to say "event X in state Y will have effect Z" is really powerful for reasoning about the system. Plus, sometimes, you can actually use that state machine to reason about things more formally, or even just informally draw out the states, events, and transitions and spot anywhere you might get stuck.
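
A table-driven version makes the "event X in state Y has effect Z" property something you can literally read off, and review, in one place. Hypothetical sketch, nothing from a real system:

  typedef enum { ST_IDLE, ST_RUNNING, ST_FAULT, NUM_STATES } state_t;
  typedef enum { EV_START, EV_STOP, EV_ERROR, NUM_EVENTS } event_t;

  typedef struct {
      int     valid;                /* 0 = event ignored in this state */
      state_t next;                 /* state to enter */
      void  (*effect)(void);        /* action on the transition, may be NULL */
  } transition_t;

  static void start_motor(void)  { /* ... */ }
  static void stop_motor(void)   { /* ... */ }
  static void safe_outputs(void) { /* ... */ }

  /* Every meaningful (state, event) pair is spelled out; anything not
   * listed is explicitly a no-op, which makes reviews straightforward. */
  static const transition_t table[NUM_STATES][NUM_EVENTS] = {
      [ST_IDLE]    = { [EV_START] = { 1, ST_RUNNING, start_motor  },
                       [EV_ERROR] = { 1, ST_FAULT,   safe_outputs } },
      [ST_RUNNING] = { [EV_STOP]  = { 1, ST_IDLE,    stop_motor   },
                       [EV_ERROR] = { 1, ST_FAULT,   safe_outputs } },
      [ST_FAULT]   = { /* no way out except a reset */ },
  };

  state_t handle_event(state_t st, event_t ev)
  {
      const transition_t *t = &table[st][ev];
      if (!t->valid)
          return st;                /* event has no effect in this state */
      if (t->effect)
          t->effect();
      return t->next;
  }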

alangibson an hour ago | prev | next |

This is big for the CNC community. RT is a must have, and this makes builds that much easier.

dale_glass an hour ago | root | parent |

Why use Linux for that though? Why not build the machine like a 3D printer, with a dedicated microcontroller that doesn't even run an OS and has completely predictable timing, and a separate non-RT Linux system for the GUI?

juliangmp 31 minutes ago | root | parent |

I feel like Klipper's approach is fairly reasonable: let a non-RT system (which generally has better performance than your microcontroller) calculate the movement, but leave the actual commanding of the stepper motors to the microcontroller.

taeric 3 hours ago | prev | next |

Sounds exciting. Can anyone recommend a good place to read about the nuances of these patches? Is the ZDNet link about the best available at the moment?

jovial_cavalier an hour ago | prev | next |

A few months ago, I played around with a contemporary build of preempt_rt to see if it was at the point where I could replace xenomai. My requirement is to be able to wake up on a timer with an interval of less than 350 us and do some work with low jitter. I wrote a simple task that just woke up every 350us and wrote down the time. It managed to do it once every 700us.
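
For reference, that kind of test is usually a SCHED_FIFO thread sleeping on an absolute CLOCK_MONOTONIC deadline and recording how late each wake-up was. A rough sketch of that kind of measurement loop (the 350 us period matches the test above; priority and iteration count are arbitrary):

  #include <sched.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <time.h>

  #define PERIOD_NS 350000                    /* 350 us */
  #define LOOPS     100000                    /* arbitrary */

  static int64_t ns_between(struct timespec a, struct timespec b)
  {
      return (int64_t)(a.tv_sec - b.tv_sec) * 1000000000 + (a.tv_nsec - b.tv_nsec);
  }

  int main(void)
  {
      struct sched_param sp = { .sched_priority = 90 };   /* arbitrary */
      sched_setscheduler(0, SCHED_FIFO, &sp);             /* needs root/CAP_SYS_NICE */
      mlockall(MCL_CURRENT | MCL_FUTURE);

      struct timespec next, now;
      int64_t worst = 0;

      clock_gettime(CLOCK_MONOTONIC, &next);
      for (int i = 0; i < LOOPS; i++) {
          next.tv_nsec += PERIOD_NS;
          if (next.tv_nsec >= 1000000000) {
              next.tv_nsec -= 1000000000;
              next.tv_sec++;
          }
          /* Sleep to an absolute deadline so error doesn't accumulate. */
          clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
          clock_gettime(CLOCK_MONOTONIC, &now);

          int64_t late = ns_between(now, next);           /* wake-up latency */
          if (late > worst)
              worst = late;
      }
      printf("worst-case wake-up latency: %lld ns\n", (long long)worst);
      return 0;
  }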

I don't believe they've actually made the kernel completely preemptive, though others can correct me. This means that you cannot achieve the same realtime performance with this as you could with a mesa kernel like xenomai.

netdur 3 hours ago | prev |

[flagged]

osamagirl69 2 hours ago | root | parent | prev | next |

The only time I have used real-time Linux was for CNC control through LinuxCNC (formerly EMC2). https://linuxcnc.org/

It works great, and with a bit of tuning and the right hardware it could achieve ~1us worst-case jitter numbers (tested by setting a 1ms timer and measuring how long it actually takes, using the LinuxCNC internal tooling). Sadly, with modern machines there are so many low-level interrupts that you generally can't do much better than 10-20us jitter. If you are not careful you can easily see spikes of >100us due to poorly behaving drivers.

gorbypark an hour ago | root | parent |

Came here to say basically the same thing. Linux CNC on an old PC with a parallel port can do some amazing things!

ctoth 2 hours ago | root | parent | prev | next |

Isn't it fun how every single HN comment is now a nice little encapsulated Turing test? Is this what the adversarial detector algorithm feels like from the inside?

lawlessone 2 hours ago | root | parent |

Good, I'm not the only one thinking this. That last line prompting for replies was odd.

edit: and it basically paraphrased the article..

miki123211 2 hours ago | root | parent |

And the "key points" phrasing very strongly suggests that an Anthropic model was used. It's a telltale sign for those, just like Delve is (was) for Open AI.