[App_rpt-users] DIAL 8.5 and VM's on ProxMox
Will Bashlor
will at bashlor.com
Tue Nov 28 18:40:57 UTC 2017
Hi Benjamin,
I'm not sure of the specifics of your environment, of course, but if there are multiple live versions of a virtual machine, then the redundancy/failover features are typically within the guest virtual machine or the specific services it offers, such as primary and secondary DNS servers, for example.
In other cases in a virtual environment, where the redundancy/failover features are within the hypervisor, whether it be Proxmox, ESXi, Hyper-V, etc., the key is shared storage. In the event of a critical failure of an individual host (an individual server, or a server blade in a chassis), the hypervisor detects the failure and automatically moves the single instance of the running virtual machine to another host, with minimal downtime.
Running virtual machines can also be manually or automatically migrated from one host to another to more equally distribute the load across the cluster, with virtually (haha) zero downtime. I've pinged virtual machines continuously while they were being migrated and never lost a ping!
More info:
https://en.wikipedia.org/wiki/Hypervisor
Maybe this explanation helps someone...
73
Will, KE4IAJ
TARG AEC
-----Original Message-----
From: App_rpt-users [mailto:app_rpt-users-bounces at lists.allstarlink.org] On Behalf Of David McGough
Sent: Monday, November 27, 2017 10:07 PM
To: Users of Asterisk app_rpt <app_rpt-users at lists.allstarlink.org>
Subject: Re: [App_rpt-users] DIAL 8.5 and VM's on ProxMox
To REALLY tell how well any environment is working, you need to check the timing quality as reported by the dahdi kernel drivers. Many environments (particularly VPS!) do rather poorly in this area. Poor results typically mean audio choppiness and poor telemetry timing (e.g., bad CW or tone timing), particularly where the server is used as a hub with many users connecting, needing to mix many audio streams. Note that this is an Asterisk thing, not specifically AllStar. Many messages have been written about this in other Asterisk-related forums; Googling will find many results.
To test the timing quality, use the dahdi_test command. Jitter in the timing results, or accuracy below about 99.8%, means less-than-perfect performance and potentially mediocre results.
Here is a sample run from my dev RPi3 system with 3 nodes (2 USB audio, 1 pseudo) active:

[root@alarmpi-kb4fxc asterisk]# dahdi_test -c 100
Opened pseudo dahdi interface, measuring accuracy...
99.992% 99.990% 99.994% 99.994% 99.995% 99.996% 99.994% 99.994% 99.995% 99.993% 99.996% 99.994% 99.994% 99.994% 99.995% 99.996% 99.993% 99.993% 99.994% 99.995% 99.995% 99.994% 99.994% 99.994% 99.994% 99.995% 99.993% 99.994% 99.994% 99.994% 99.995% 99.993% 99.994% 99.993% 99.994% 99.995% 99.993% 99.994% 99.994% 99.994% 99.996% 99.993% 99.994% 99.994% 99.995% 99.996% 99.994% 99.994% 99.994% 99.994% 99.996% 99.994% 99.994% 99.993% 99.994% 99.995% 99.995% 99.993% 99.994% 99.995% 99.995% 99.995% 99.994% 99.994% 99.994% 99.994% 99.994% 99.994% 99.994% 99.995% 99.995% 99.994% 99.994% 99.993% 99.994% 99.996% 99.993% 99.995% 99.994% 99.995% 99.996% 99.993% 99.994% 99.994% 99.995% 99.996% 99.994% 99.994% 99.994% 99.994% 99.996% 99.994% 99.994% 99.995% 99.994% 99.996% 99.994% 99.993%
--- Results after 98 passes ---
Best: 99.996% -- Worst: 99.990% -- Average: 99.994247%
Cummulative Accuracy (not per pass): 99.994
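[Editor's note: as a rough illustration, not part of the original post, the per-pass percentages that dahdi_test prints can be summarized with a short script. The ~99.8% threshold is the rule of thumb mentioned above; the sample readings below are hypothetical, not taken from the run shown.]

```python
# Summarize dahdi_test-style per-pass accuracy figures: best, worst,
# average, and whether any pass fell below the ~99.8% rule of thumb.

def summarize(passes, threshold=99.8):
    """Return (best, worst, average, ok) for a list of accuracy percentages."""
    best = max(passes)
    worst = min(passes)
    avg = sum(passes) / len(passes)
    ok = worst >= threshold  # every pass must meet the threshold
    return best, worst, avg, ok

if __name__ == "__main__":
    # Hypothetical per-pass readings, similar in shape to dahdi_test output.
    sample = [99.992, 99.990, 99.994, 99.995, 99.993]
    best, worst, avg, ok = summarize(sample)
    print(f"Best: {best}% -- Worst: {worst}% -- Average: {avg:.6f}%")
    print("Timing quality OK" if ok else "Timing below threshold")
```

On a real system you would paste in (or pipe and parse) the actual dahdi_test output rather than a hand-typed list.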
73, David KB4FXC
On Mon, 27 Nov 2017, Benjamin Naber wrote:
> For those of you who are into this sort of thing, DIAL 8.5 has been installed, conbooberated, and is running successfully with no apparent lag on the latest stable version of Proxmox. Currently, it is a radio-less node.
The test environment is a cluster of three Dell R310 servers (nodes), each with 16GB+ RAM, RAID 1 system drives, some other volume drives, 10Gb fiber storage network links, and 1Gb network connections.
Each of the three nodes has other VMs running on it, running stuff like BIONIC and other silly things for 'stress' testing, as the system is being evaluated for a production environment.
To my understanding, Proxmox is not a load-sharing/proxy/cloud-computing network; each VM is hosted/homed on a single node but has "live" versions on the other nodes in the cluster. Should a node suffer both power supply failures, or its CPU fan squeal to a stop, or its RAID controller die, then within a minute or so another node will spin up the live versions of the VMs that were on the now-dead node.
So far, there has not been any noticeable lag, jitter, delay, or anything else negative, much to my surprise. This is VM 101 for me; I had never messed with it until now.
If anyone wants to assist in testing, you are invited to connect to 29567 on Tuesday night, 7 PM Central/8 PM Eastern, for our weekly AllStar Technical Net.
I am calling the net tomorrow night, for which the topics are advanced ASL node configurations and some other stuff I have to be reminded of.
There is chatter throughout the day, more so at night, so anyone is welcome to connect anytime!
Don't be a square, connect to there!
~Benjamin, KB9LFZ
_______________________________________________
App_rpt-users mailing list
App_rpt-users at lists.allstarlink.org
http://lists.allstarlink.org/cgi-bin/mailman/listinfo/app_rpt-users
To unsubscribe from this list please visit http://lists.allstarlink.org/cgi-bin/mailman/listinfo/app_rpt-users and scroll down to the bottom of the page. Enter your email address and press the "Unsubscribe or edit options button"
You do not need a password to unsubscribe, you can do it via email confirmation. If you have trouble unsubscribing, please send a message to the list detailing the problem.