Stream Delay - High Latency in Web Browser Stream

This comes up a lot, so here's how to mitigate it.

The web browser isn't designed to serve as an NVR interface, so some processing, and occasionally some trickery, is needed to make it work this way. Normally you would install a native application on the operating system to view these kinds of systems, because an installed application sits closer to the hardware when running and has dedicated threads for its operations. A web browser adds multiple extra layers to make it usable for its intended purpose: surfing the web.

Additionally, Shinobi defaults to options tuned for performance rather than low latency. You can change this at the expense of resources on both your server and your client. See "So the best options for low latency are" below, as well as the notes following.

One major factor is the I-Frame (keyframe) interval set in the camera's stream. If you choose not to encode (i.e. you copy the video codec), your camera dictates this setting, usually through its own web interface. In that case, set the I-Frame interval to match the FPS you are using: if the camera outputs 15 FPS, set the I-Frame interval to 15.
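Some rough arithmetic shows why the interval matters: a viewer can only begin decoding at a keyframe, so a larger interval means a longer wait before video appears. This is an illustrative sketch, not Shinobi code; the numbers are examples.

```python
def keyframe_wait_seconds(fps: float, iframe_interval: int) -> tuple:
    """Return (worst_case, average) seconds a new viewer waits for a keyframe."""
    worst = iframe_interval / fps
    return worst, worst / 2

# I-Frame interval matched to FPS, as suggested above: at most ~1 s of wait.
print(keyframe_wait_seconds(15, 15))  # (1.0, 0.5)
# A 60-frame interval at the same 15 FPS: up to 4 s before the stream can start.
print(keyframe_wait_seconds(15, 60))  # (4.0, 2.0)
```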

That said, the latency is mainly due to the dashboard running in a web browser. Multiple layers of processing occur on the client-side CPU, and that partially causes the delay, especially with H.264-based stream types.

So the best options for low latency are:

  • For low CPU use at the cost of a slight delay (this also requires optimal network conditions):
    • Set the Stream Type (blue section of Monitor Settings) to Poseidon over Websocket.
  • To reduce the delay further, at the expense of CPU power on the server:
    • Set the Stream Type (blue section of Monitor Settings) to Base64 or MJPEG.

Things to note:

  • Poseidon over Websocket may stutter when network-to-server connectivity is poor.
  • Base64 has no stream limit; watch as many streams on the dashboard as you want.
  • MJPEG is limited by the browser's HTTP connection limit.
  • Both MJPEG and Base64 use a large chunk of CPU per stream.
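The MJPEG connection limit follows from how MJPEG is framed on the wire: the server sends one never-ending multipart/x-mixed-replace HTTP response, so each stream holds one of the browser's per-host connections for its whole lifetime. Below is a minimal sketch of that framing; the boundary name and frame bytes are illustrative stand-ins, not Shinobi's actual implementation.

```python
BOUNDARY = b"frame"  # illustrative boundary name, not Shinobi's

def mjpeg_part(jpeg_bytes: bytes) -> bytes:
    """Wrap one JPEG frame as a multipart part. Parts like this keep
    arriving on the same open HTTP response, which is why each MJPEG
    stream occupies one browser connection until it is closed."""
    header = (
        b"--" + BOUNDARY + b"\r\n"
        + b"Content-Type: image/jpeg\r\n"
        + b"Content-Length: " + str(len(jpeg_bytes)).encode("ascii") + b"\r\n\r\n"
    )
    return header + jpeg_bytes + b"\r\n"

fake_jpeg = b"\xff\xd8" + b"\x00" * 100 + b"\xff\xd9"  # stand-in frame
print(mjpeg_part(fake_jpeg)[:40])
```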

Short answer: no, the stream delay does not affect Object Detection or Motion Detection. Frames are generated as fast as possible when they are encoded, and both detection methods require encoding of frames at this time.

Once a frame is created it is sent to the detection engine for analysis. This is partly why you may see detection results in real-time while the live video stream is a few seconds behind.

To view the stream itself in real-time you would need a Stream Type like MJPEG or Base64; however, these methods use a considerable amount of processing power.
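Part of that cost is inherent to the Base64 approach: every frame is re-encoded as text, which inflates it by roughly a third before it even reaches the browser. This is a general property of Base64, shown here with a stand-in frame rather than a real capture.

```python
import base64

frame = b"\xff\xd8" + b"\x00" * 30000 + b"\xff\xd9"  # ~30 KB fake JPEG frame
encoded = base64.b64encode(frame)
ratio = len(encoded) / len(frame)
print(round(ratio, 2))  # 1.33: ~33% more bytes per frame, plus the CPU spent encoding
```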

