[P25nx] Welcome
Bryan Fields
Bryan at bryanfields.net
Fri Jul 27 15:22:05 EDT 2018
On 7/27/18 2:56 PM, Jamison Judge wrote:
>> Oh dear god, don't let Cisco do multicast, they suck total ass doing it.
> THANK YOU…… Using Cisco's GRE multipoint for this type of traffic is also a
> desperate plea for disaster.
It's one of those things that seems like a good idea if you don't really know
how the router works in production, and it's really cool since it throws the
traffic on the network and lets the network do talkgroup routing. That being
said, multicast is poorly understood even by networking people.
In classic IOS, multicast is process switched in the CPU, and any multicast
packets that are encrypted must take two trips through the CPU. This is slow,
and then multicast replication happens; the way the 1800/2800/etc. routers
do it is via memory copy. This means a multicast packet comes in, takes two
trips through the CPU, then a copy is built for each destination and
assembled in RAM (say 80 packets now). Each packet takes two trips back
through the CPU and out to the network.
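To put it concretely, here's a rough Go sketch of what process-switched
replication amounts to (names made up for illustration, not Cisco's actual
code path): one packet in becomes one full buffer copy per destination, and
every copy has to go back through the CPU to get transmitted.

package main

import "fmt"

// replicate models process-switched multicast replication: the router
// builds a complete copy of the packet in RAM for every destination,
// then each copy is pushed back through the CPU to be sent.
func replicate(pkt []byte, destinations int) [][]byte {
    copies := make([][]byte, destinations)
    for i := range copies {
        buf := make([]byte, len(pkt))
        copy(buf, pkt) // one full memory copy per destination
        copies[i] = buf
    }
    return copies
}

func main() {
    pkt := make([]byte, 1500) // one inbound voice packet
    out := replicate(pkt, 80) // say 80 talkgroup destinations
    fmt.Printf("1 packet in -> %d packets out\n", len(out))
}

The copy loop is the cheap part; the killer is that each of those 80 buffers
then repeats the whole CPU round trip.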
Now the CPU of an 1800/2800 is a PowerPC at like 200 MHz. Throw 50 packets per
second at it and it's got trouble keeping up with 20k copies per second (and
doing all that other stuff like, you know, maintaining OSPF and a FIB). A
Raspberry Pi 3 has 4 cores running at 1200 MHz, or 6x faster per core. Linux
also has a better optimized network stack than Cisco's.
If we talk about routers that don't suck (like a Nokia :), they do multicast
in the switch fabric with dedicated ASICs and can replicate 1 Tb/s flows to
thousands of endpoints.
>> Our first work was doing programmed I/O at 9600 bps, and we just couldn't
>> keep up
> Interesting. Out of curiosity, was this using Go? I’ve never tried
> high-level synchronous bit banging in practice, but until I read this, my
> gut would have said that rate would be no problem.
Yep, it was in Go talking to an FTDI controller. The issue was no hardware
buffers: if you drop one clock, the entire data stream is fucked. We were
running at 30% CPU on a Pi 3 to make it work. Even with a normal serial port
you have a UART, which is a dedicated controller that can buffer 16 bytes.
With the sync serial we need a dedicated controller to convert it to normal
async serial. That controller's CPU only does one thing, and at 9600 bps with
a 20 MHz CPU clock, it's got like 1000 instructions it can perform between
clock pulses. Also the PIC has an 8-byte buffer on the async side, which
further frees it up to do stuff in real time.
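For anyone who hasn't tried it, here's a minimal sketch of what the receive
loop looks like (readClock/readData stand in for the real FTDI GPIO polls,
and the fake clock below just lets the sketch run standalone; none of this
is our actual code):

package main

import "fmt"

// tick fakes the clock and data lines so this sketch runs by itself;
// in the real program readClock/readData would poll FTDI GPIO pins
// through the vendor library (these names are made up).
var tick int

func readClock() bool {
    tick++
    return tick%2 == 0
}

func readData() bool {
    return tick%4 == 0
}

// recvByte busy-waits on the clock line and samples the data line on
// each rising edge. There is no UART or FIFO behind this: miss one
// edge and every bit after it is garbage.
func recvByte() byte {
    var b byte
    for i := 0; i < 8; i++ {
        for readClock() {
            // wait for the clock to go low
        }
        for !readClock() {
            // wait for the rising edge
        }
        b <<= 1
        if readData() {
            b |= 1
        }
    }
    return b
}

func main() {
    fmt.Printf("got byte: %#02x\n", recvByte())
}

The CPU has to be sitting in that busy-wait for every single bit; get
preempted past one edge and everything downstream is trash, which is why
this ate 30% of a Pi 3.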
>> What I propose is to have a core network of servers/VMs, 4 to 8 world
>> wide. These run Linux and have GRE tunnels between them with OSPF for
>> unicast and MOSPF/PIM-SM for multicast.
>
> I think it's worth weighing the cost/benefit of a multicast infrastructure
> at all. Is the promise of theoretical "infinite scalability" worth all the
> complexity that's introduced? From a pragmatic, here-and-now standpoint, we
> have a relatively small number of spokes that may very well naturally have
> very little path overlap anyway (i.e. an advanced multicast topology might
> end up acting as a de facto multiple-unicast anyway). Just a thought.
Even if it's one server on day one, we still need to map traffic to different
talkgroups. Multicast makes this easy to accomplish, and the program is the
same on day one as at scale. Go is really aimed at scalable multi-threaded
programs, and this modular approach gives us the ability to change small
things without a massive refactoring of the code.
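As a sketch of what I mean (the 239.1.x.x range and port are placeholders I
made up, not anything we've specified): map the talkgroup ID onto a multicast
group and the sender's code is identical whether one server or eight are
subscribed.

package main

import (
    "fmt"
    "net"
)

// groupForTalkgroup maps a P25 talkgroup ID into a multicast group.
// The 239.1.x.x range and port 5000 are illustrative placeholders.
func groupForTalkgroup(tg uint16) *net.UDPAddr {
    ip := net.IPv4(239, 1, byte(tg>>8), byte(tg&0xff))
    return &net.UDPAddr{IP: ip, Port: 5000}
}

func main() {
    addr := groupForTalkgroup(10100)
    conn, err := net.DialUDP("udp4", nil, addr)
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    // Whoever has joined the group gets this; one listener or a
    // thousand, the sender's code doesn't change.
    if _, err := conn.Write([]byte("voice frame goes here")); err != nil {
        panic(err)
    }
    fmt.Println("sent to", addr)
}

On the receive side each server just joins the groups for the talkgroups it
carries, and PIM-SM handles getting the traffic there.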
We're not married to Go either, so if you know C we can likely use the help.
--
Bryan Fields
727-409-1194 - Voice
http://bryanfields.net