[P25nx] Welcome
Jamison Judge
jamison at jamisonjudge.com
Fri Jul 27 15:03:50 EDT 2018
Well, your design got a worldwide network up and running, and that’s FAR more than I can say :)
> On Jul 27, 2018, at 12:00 PM, David Krauss <nx4y at verizon.net> wrote:
>
>
>
> Sent from an iPhone 8
>
> On Jul 27, 2018, at 2:56 PM, Jamison Judge <jamison at jamisonjudge.com> wrote:
>
>>> Oh dear god, don't let cisco do multicast, they suck total ass doing it.
>>
>> THANK YOU…… Using Cisco’s GRE multipoint for this type of traffic is also a desperate plea for disaster.
>>
> I wasn’t aware I was making a desperate plea for disaster. I thought it was kinda slick, actually.
>
>>
>>> Our first work was doing programmed I/O at 9600 bps, and we just couldn't keep
>>> up
>>
>> Interesting. Out of curiosity, was this using Go? I’ve never tried high-level synchronous bit-banging in practice, but until I read this, my gut would have said that rate would be no problem.
>>
>>
>>> What I propose is to have a core network of servers/VMs, 4 to 8 worldwide.
>>> These run Linux and have GRE tunnels between them with OSPF for unicast and
>>> MOSPF/PIM-SM for multicast.
>>
>>
>> I think it’s worth weighing the cost/benefit of a multicast infrastructure at all. Is the promise of theoretical "infinite scalability" worth all the complexity that’s introduced? From a pragmatic, here-and-now standpoint, we have a relatively small number of spokes that may very well naturally have very little path overlap anyway (i.e. an advanced multicast topology might end up acting as a de facto multiple-unicast anyway). Just a thought.
>
> It solved the dynamic talkgroup problems too. New talkgroups can be built on the fly.
>>
>>
>>
>>> On Jul 26, 2018, at 12:59 AM, Bryan Fields <Bryan at bryanfields.net> wrote:
>>>
>>> On 7/26/18 12:47 AM, jy at xtra.co.nz wrote:
>>>> Hi Bryan,
>>>>
>>>> Thank you for breathing fresh life into this group.
>>>>
>>>> As I'm sure you know, the issue of jitter has rather plagued P25
>>>> transmission over the internet well before the advent of P25NX.
>>>>
>>>> I've spent some time looking at the jitter issue and at my urging Dave's
>>>> P25NX V1 dashboard did have a jitter meter of sorts to allow jitter to be
>>>> examined. FWIW my impression is that the audio from a Quantar locally
>>>> connected to a voter that in turn is receiving packets from the internet
>>>> via the original Cisco STUN process sounds better than the same stream
>>>> connected directly to the Quantar.
>>>>
>>>> One of the P25NX V2 goals was to move away from the STUN TCP style
>>>> connection to UDP but I think that is only half of the solution. Similarly
>>>> writing code to implement a jitter buffer for STUN based on the Motorola
>>>> V.24 transport has issues because there is not a one-to-one relationship
>>>> between the V.24 records and STUN frames.
>>>
>>> Well, that's the issue: having to use a Cisco. It's a cheap off-the-shelf
>>> sync serial HDLC-to-IP converter. It's not designed for any of this; hell,
>>> STUN isn't even a defined protocol.
>>>
>>> Our first work was doing programmed I/O at 9600 bps, and we just couldn't keep
>>> up; it needed dedicated hardware. Luckily Juan Carlos XE1F was able to
>>> code up a sync-to-async HDLC converter in a PIC16F887, and I got a prototype of
>>> this working. It converts the sync HDLC to a regular serial port at 9600 bps.
>>>
>>> This needs more testing.
>>>
>>>> A possible line of attack may involve taking the V.24 stream, packing
>>>> the IMBE records one by one into RTP frames, and using the Cisco
>>>> router's built-in RTP jitter buffer features. This addresses both UDP
>>>> transport and the necessary timestamps for successful de-jitter
>>>> processing.
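What the router's RTP jitter buffer buys you can be shown with a toy sketch in Go (the project's language). This only demonstrates the reordering that RTP timestamps make possible; a real jitter buffer (including Cisco's) also adapts its depth and conceals late or lost frames, and nothing here is the actual router behavior:

```go
package main

import (
	"fmt"
	"sort"
)

// packet is a stand-in for a received RTP packet: just the
// timestamp and a payload tag.
type packet struct {
	ts   uint32 // RTP timestamp
	data string
}

// drain releases buffered packets in timestamp order, regardless
// of the order the network delivered them in.
func drain(buf []packet) []packet {
	sort.Slice(buf, func(i, j int) bool { return buf[i].ts < buf[j].ts })
	return buf
}

func main() {
	// Arrival order scrambled by the network...
	arrivals := []packet{{320, "C"}, {160, "B"}, {0, "A"}}
	// ...but playout follows the timestamps.
	for _, p := range drain(arrivals) {
		fmt.Print(p.data)
	}
	fmt.Println() // prints "ABC"
}
```

The point is that the V.24/STUN path has no such timestamp to sort on, which is why de-jittering it is awkward.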
>>>
>>> So we did get something like this going. It talks STUN to a Cisco and
>>> completely rebuilds the frames that the Quantar expects. It can take raw
>>> IMBE data and publish it to the Quantar, ensuring the Reed-Solomon FEC is
>>> set up properly along with all the other flags and bits needed. We can do the
>>> same in reverse. This is all written in Go.
>>>
>>> RTP may be an option for this, but much of the protocol would need to be
>>> engineered for this application.
>>>
>>>> How might this work? There is no RTP payload type for Motorola V.24
>>>> encoded IMBE; AFAIK there is no RTP type for IMBE at all. But there is a
>>>> Cisco-supported payload type of 'transparent' which operates on the assumption
>>>> that the codec is in the endpoint (i.e. the Quantar or DIU).
>>>>
>>>> So my proposal is that Motorola V.24 is converted to RTP with the IMBE
>>>> records packed 1:1 into 20ms RTP frames of codec type transparent then
>>>> transported by Cisco over the internet as RTP. The router's jitter buffer
>>>> will de-jitter the RTP packets based on the RTP timestamp. The de-jittered
>>>> frames then need to be converted back to V.24 HDLC frames for the
>>>> connected Quantar.
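The 1:1 packing described above amounts to prepending a 12-byte RTP header to each IMBE record. A minimal sketch in Go, with several loudly-flagged assumptions: payload type 96 (a dynamic type, since no IMBE type is registered), an 88-bit (11-byte) IMBE codeword, and an 8 kHz clock giving 160 ticks per 20 ms frame. None of these numbers comes from the thread:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// buildRTP wraps one IMBE record in a minimal 12-byte RTP header.
// Payload type 96 and the 8 kHz clock are assumptions for this
// sketch, not anything Cisco or the spec mandates for 'transparent'.
func buildRTP(seq uint16, ts uint32, ssrc uint32, imbe []byte) []byte {
	hdr := make([]byte, 12)
	hdr[0] = 2 << 6 // version 2, no padding/extension/CSRC
	hdr[1] = 96     // dynamic payload type ("transparent" codec)
	binary.BigEndian.PutUint16(hdr[2:], seq)
	binary.BigEndian.PutUint32(hdr[4:], ts)
	binary.BigEndian.PutUint32(hdr[8:], ssrc)
	return append(hdr, imbe...)
}

func main() {
	imbe := make([]byte, 11) // one 88-bit IMBE codeword (assumed size)
	pkt := buildRTP(1, 160, 0xdeadbeef, imbe)
	fmt.Printf("packet length %d bytes\n", len(pkt)) // prints "packet length 23 bytes"
}
```

With 'transparent', the router never parses the payload; it only needs the sequence number and timestamp above to de-jitter.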
>>>
>>> 20 ms frames are a horrible waste of bandwidth, and they're hard on routers. A
>>> better idea was taking 180 or 360 ms of audio (9 or 18 frames of P25), adding
>>> the headers and handling to it, and then sending that out. This is 100 or 200
>>> bytes of data plus overhead, still well under the 576-byte minimum datagram
>>> size every IPv4 host must accept.
>>>
>>> Worst case you would have 360 ms of delay through the network, and this really
>>> isn't an issue for hams. Now the cool thing is you can send two packets with
>>> sequence numbers and have no need for FEC. If one drops, you get the other,
>>> and 360 ms of audio with headers is about 400 bytes. This is 8kbit/s on the
>>> wire.
>>>
>>> If we look at 20 ms frames, we have to pad them out to 64 bytes (minimum
>>> packet size) and send 50 of them per second: 64 bytes * 50 frames/s * 8
>>> bits/byte = 25.6 kbit/s. And it's much more work to route/process 50 packets
>>> versus fewer than 3.
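The arithmetic in the comparison above can be checked with a few lines of Go; the 64-byte minimum, 400-byte aggregate, and frame intervals are taken straight from the text:

```go
package main

import "fmt"

func main() {
	// 20 ms framing: each packet padded to the 64-byte minimum,
	// 50 packets per second.
	smallBps := 64 * 50 * 8
	fmt.Printf("20 ms frames: %.1f kbit/s\n", float64(smallBps)/1000) // 25.6 kbit/s

	// 360 ms aggregation: about 400 bytes (18 P25 frames plus headers)
	// every 360 ms. Roughly 8 kbit/s per copy; double it if every
	// frame is sent twice for redundancy.
	bigBps := 400 * 8 * 1000 / 360
	fmt.Printf("360 ms frames: %.1f kbit/s per copy\n", float64(bigBps)/1000) // 8.9 kbit/s
}
```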
>>>
>>>> How might this work in practice? We connect the V.24 to the Cisco router
>>>> using the classic WIC-1 or similar interface. We have to accept that the
>>>> router only knows how to encapsulate HDLC as STUN so this needs to be spat
>>>> out of an Ethernet port to a Raspberry Pi or similar (P25NX V2 style). The
>>>> Pi or other external box builds a properly formatted stream to send back to
>>>> the router on the same Ethernet port, bound for the far-end destination. The
>>>> topology could be an established RTP point-to-point connection to something
>>>> like an AstroTAC voter (or a PC emulation of that) or a point-to-multipoint
>>>> connection (Cisco uses the language 'hoot and holler' for this) via
>>>> multicast much as in V2.
>>>
>>> Oh dear god, don't let cisco do multicast, they suck total ass doing it.
>>>
>>> What I propose is to have a core network of servers/VMs, 4 to 8 worldwide.
>>> These run Linux and have GRE tunnels between them, with OSPF for unicast and
>>> MOSPF/PIM-SM for multicast. They are assumed to have solid connections
>>> between them as they are in datacenters.
>>>
>>> Running on each of these Linux hosts facing the users is a unicast to
>>> multicast daemon. This will accept a connection from a repeater, auth it,
>>> negotiate it (1 packet or 2 for voice), keep state of talkgroups in use
>>> (pruning), handle forwarding/CAC if overloaded, and translations.
>>>
>>> Basically my repeater client will connect up via UDP over IPv4/6 to the
>>> server, auth, and negotiate whether I'd like 1 or 2 copies of every frame
>>> and what talkgroups I want. The server will convert my IPv4 unicast into
>>> IPv6 multicast on the backend. Linux routing will forward these frames
>>> reliably and track loss at each hop between the servers.
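The two-copy option needs only a small piece of client-side logic instead of FEC: keep the first copy of each sequence number and drop the duplicate. A sketch of that idea (a hypothetical helper, not the project's actual client code; a real client would also age out old sequence numbers rather than grow the map forever):

```go
package main

import "fmt"

// dedupe keeps the first copy of each sequence number. When the
// server sends every frame twice, either copy surviving the network
// is enough for the client.
type dedupe struct {
	seen map[uint16]bool
}

// accept reports whether this sequence number is new (deliver it)
// or a duplicate copy (drop it).
func (d *dedupe) accept(seq uint16) bool {
	if d.seen[seq] {
		return false // second copy, already delivered
	}
	d.seen[seq] = true
	return true
}

func main() {
	d := &dedupe{seen: map[uint16]bool{}}
	// Frames 1 and 2 arrive twice; frame 3 loses one copy in transit.
	for _, seq := range []uint16{1, 1, 2, 2, 3} {
		if d.accept(seq) {
			fmt.Println("deliver frame", seq)
		}
	}
	// Every frame is still delivered exactly once.
}
```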
>>>
>>> This makes the server side robust and simple in its processing, with
>>> most heavy lifting done on the client. In testing this I was able to process
>>> about 20k pps across a test network of 6 RPi 3's emulating this "core". It
>>> would be easy to expand this network too, and using IPv6 multicast makes it
>>> very easy to map 65k talkgroups into the IP space; much easier than IPv4.
>>> Since this removes multicast from the clients, it will even work behind NAT.
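The talkgroup-to-address mapping can be as direct as dropping the 16-bit talkgroup ID into the low bits of an IPv6 multicast group. The `ff15::` prefix (site-local scope, transient flag) and the example talkgroup number are assumptions for this sketch, not the network's actual plan; the point is that all 65k talkgroups fit with no allocation table, which the IPv4 `224/4` space can't do as cleanly:

```go
package main

import (
	"fmt"
	"net"
)

// talkgroupAddr maps a 16-bit P25 talkgroup ID into an IPv6
// multicast group under an assumed ff15::/112 prefix.
func talkgroupAddr(tg uint16) net.IP {
	addr := make(net.IP, 16)
	copy(addr, net.ParseIP("ff15::").To16())
	addr[14] = byte(tg >> 8) // talkgroup ID in the low 16 bits
	addr[15] = byte(tg)
	return addr
}

func main() {
	// Hypothetical example talkgroup.
	fmt.Println(talkgroupAddr(10200)) // prints "ff15::27d8"
}
```

The server joins and leaves these groups as clients negotiate talkgroups, which is where the pruning state mentioned above lives.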
>>>
>>> The server side would fail over in the event of a server going down; you'd
>>> just connect to the backup. With a common interface, we could even write a
>>> translator from MMDVM to it, or even YSF, which uses the same IMBE codec in
>>> wide mode (blasphemy, no?). This would have a distinct advantage over the
>>> MMDVM reflectors that simply map a talkgroup to one server: if the WW server
>>> is down, no one can talk. Granted, it's a bit more complex too.
>>>
>>> Much of the client is working and coded in Go. If you know Go and want to get
>>> involved, let me know; I'm mostly on the protocol design side, coding is beyond
>>> me. We've all been busy with work/travel and the project is on the back burner.
>>>
>>>
>>> --
>>> Bryan Fields
>>>
>>> 727-409-1194 - Voice
>>> http://bryanfields.net
>>> _______________________________________________
>>> P25nx-interest mailing list
>>> P25nx-interest at lists.keekles.org
>>> http://lists.keekles.org/cgi-bin/mailman/listinfo/p25nx-interest
>>
>