[App_rpt-users] ezstream 100% cpu

Loren Tedford lorentedford at gmail.com
Thu May 5 03:24:49 UTC 2016


I restarted Asterisk with the script below. So far we are back up and
running.

root at server:/etc/asterisk# cat restart.sh
#!/bin/bash
kill -9 $(pgrep ezstream) &
kill -9 $(pgrep ezstream)    # second pass in case the first missed a pid
service asterisk stop
sleep 1s
killall asterisk             # force-stop anything the init script left behind
kill -9 $(pgrep ezstream) &
service asterisk stop
sleep 1s
service asterisk start
root at server:/etc/asterisk#
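
As an aside, pkill could do the lookup and the kill in one step; an
untested equivalent of the kill lines above:

pkill -9 -x ezstream

The -x flag requires the whole process name to match, so it won't touch
unrelated processes whose names merely contain "ezstream".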

Below is my server at idle. I do have other projects running on the
server, but I don't believe any of them would affect ezstream.
http://kc9zhv.com/wp-content/uploads/2016/05/2016-05-04-1.png





Loren Tedford (KC9ZHV)
Email: lorentedford at gmail.com
http://www.lorentedford.com
http://www.kc9zhv.com
http://forum.kc9zhv.com
http://hub.kc9zhv.com
http://www.newwavesucks.com
http://forum.newwavesucks.com

On Wed, May 4, 2016 at 10:07 PM, Loren Tedford <lorentedford at gmail.com>
wrote:

> I don't think I am running out of disk space, but I'm checking the logs
> now. The entries do show up in the system log; at least that's what I
> seem to see:
>
>
>
> May  4 21:20:01 server CRON[22247]: (root) CMD
> (/usr/local/sbin/./check_stream)
> May  4 21:30:01 server CRON[25488]: (root) CMD
> (/usr/local/sbin/./check_stream)
> May  4 21:40:01 server CRON[29164]: (root) CMD
> (/usr/local/sbin/./check_stream)
> May  4 21:50:01 server CRON[9769]: (root) CMD
> (/usr/local/sbin/./check_stream)
> May  4 22:00:01 server CRON[12927]: (root) CMD
> (/usr/local/sbin/./check_stream)
>
> The cron job seems to be running correctly and everything seems to point
> at the script, so I'm not sure what I am missing.
>
> root at server:/etc/asterisk# lscpu
> Architecture:          x86_64
> CPU op-mode(s):        32-bit, 64-bit
> Byte Order:            Little Endian
> CPU(s):                8
> On-line CPU(s) list:   0-7
> Thread(s) per core:    2
> Core(s) per socket:    4
> Socket(s):             1
> NUMA node(s):          1
> Vendor ID:             GenuineIntel
> CPU family:            6
> Model:                 26
> Stepping:              5
> CPU MHz:               2800.000
> BogoMIPS:              5600.43
> Virtualisation:        VT-x
> L1d cache:             32K
> L1i cache:             32K
> L2 cache:              256K
> L3 cache:              8192K
> NUMA node0 CPU(s):     0-7
>
> root at server:/etc/asterisk# df -h
> Filesystem      Size  Used Avail Use% Mounted on
> udev            7.9G  4.0K  7.9G   1% /dev
> tmpfs           1.6G  1.2M  1.6G   1% /run
> /dev/md2         77G   11G   63G  15% /
> none            4.0K     0  4.0K   0% /sys/fs/cgroup
> none            5.0M     0  5.0M   0% /run/lock
> none            7.9G  4.1M  7.9G   1% /run/shm
> none            100M   16K  100M   1% /run/user
> /dev/md3        1.8T  208G  1.5T  13% /home
>
>
>
>
>
> Loren Tedford (KC9ZHV)
> Email: lorentedford at gmail.com
> http://www.lorentedford.com
> http://www.kc9zhv.com
> http://forum.kc9zhv.com
> http://hub.kc9zhv.com
> http://www.newwavesucks.com
> http://forum.newwavesucks.com
>
> On Wed, May 4, 2016 at 10:00 PM, Ken <ke2n at cs.com> wrote:
>
>> When the script runs you will find an indication in  /var/log/cron
>>
>> Lines like this:
>>
>> May  2 20:50:01 localhost crond[30559]: (root) CMD
>> (/usr/local/sbin/./check_stream)
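>>
>> On Debian-type systems cron logs to /var/log/syslog instead; a quick way
>> to check, assuming the same script name:
>>
>> grep check_stream /var/log/syslog | tail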
>>
>>
>>
>> One thing that can cause a Linux system to grind to a near-halt is if you
>> have run out of disk space ….
>>
>>
>>
>> But I see you are running VBoxHeadless – I have no familiarity but a
>> quick Google of that finds something related
>>
>> https://forums.virtualbox.org/viewtopic.php?f=8&t=68525
>>
>>
>>
>>
>>
>> Ken
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> From: Loren Tedford [mailto:lorentedford at gmail.com]
>> Sent: Wednesday, May 04, 2016 10:42 PM
>> To: Brent Weatherall <va3bfw at gmail.com>
>> Cc: Ken <ke2n at cs.com>; app_rpt mailing list <app_rpt-users at ohnosec.org>
>> Subject: Re: [App_rpt-users] ezstream 100% cpu
>>
>>
>>
>> Well, my server is at it again: 100% CPU usage for no apparent reason,
>> which is odd.
>>
>>
>>
>> I did try Ken's script, but I am not sure it's working the way it
>> should.
>>
>> I did place the entry in crontab -e. I wonder if we are missing some
>> dependency that isn't getting installed. Maybe there is an easier way
>> to broadcast to Broadcastify.
>>
>>
>>
>>
>>
>> Here is an image of what it looks like this evening:
>>
>>
>>
>> http://kc9zhv.com/wp-content/uploads/2016/05/2016-05-04.png
>>
>>
>>
>>
>> Loren Tedford (KC9ZHV)
>> Email: lorentedford at gmail.com
>> http://www.lorentedford.com
>> http://www.kc9zhv.com
>> http://forum.kc9zhv.com
>> http://hub.kc9zhv.com
>> http://www.newwavesucks.com
>> http://forum.newwavesucks.com
>>
>>
>>
>> On Mon, May 2, 2016 at 4:46 PM, Brent Weatherall <va3bfw at gmail.com>
>> wrote:
>>
>> Ken, definitely confirmed: on my last CPU spike, the lame process was no
>> longer running. I'll give your monitoring/restart script a try. Thanks
>> for the info!
>>
>>
>>
>> On Mon, May 2, 2016 at 11:26 AM Ken <ke2n at cs.com> wrote:
>>
>> Of course the 100% CPU is probably due to it waiting for some resource,
>> rather than actually being loaded to 100%. Press “1” in top and look at
>> the “%wa” (I/O wait) figure.
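>>
>> If you'd rather not watch it interactively, top's batch mode can grab a
>> single sample; the wa figure is the I/O-wait percentage:
>>
>> top -b -n 1 | grep -i 'cpu(s)'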
>>
>>
>>
>> I found (like some others) that the problem is actually the “lame”
>> program which vanishes for some reason.  I run a script every 10 minutes
>> that checks the pid for lame and restarts ezstream – but only if needed.  I
>> think that is better than killing ezstream when it is, in fact, running
>> fine.
>>
>>
>>
>> This is the script (it is not my invention):
>>
>> #!/bin/bash
>> # If lame has vanished, kill ezstream; Asterisk then respawns the stream.
>> lamenumber=$(/sbin/pidof lame)
>> if [ "$lamenumber" = "" ]
>> then
>>         eznumber=$(/sbin/pidof ezstream)
>>         echo "$eznumber"
>>         kill -9 $eznumber    # unquoted on purpose: pidof may return several pids
>>         echo "restarting"
>>         date
>> else
>>         eznumber=""
>> fi
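>>
>> I run it from root's crontab every 10 minutes, something like this (the
>> path matches the cron log excerpts above):
>>
>> */10 * * * * /usr/local/sbin/check_stream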
>>
>>
>>
>> Regards
>>
>> Ken
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> From: Brent Weatherall [mailto:va3bfw at gmail.com]
>> Sent: Monday, May 02, 2016 11:09 AM
>> To: Loren Tedford <lorentedford at gmail.com>
>> Cc: app_rpt mailing list <app_rpt-users at ohnosec.org>
>> Subject: Re: [App_rpt-users] ezstream 100% cpu
>>
>>
>>
>> A quick look at that cron schedule suggests it would run every minute of
>> hour 0 and hour 12. Did you just want it to run once at 0 and 12? If
>> that is the case you'd want:
>>
>> 0 */12 * * *
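>>
>> For comparison, the crontab fields are minute hour day month weekday, so:
>>
>> * */12 * * *   runs every minute during hours 0 and 12 (120 times a day)
>> 0 */12 * * *   runs once at 00:00 and once at 12:00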
>>
>>
>>
>> I'll let you know how it works out for stability. Thanks again.
>>
>>
>>
>> On Mon, May 2, 2016 at 11:03 AM Loren Tedford <lorentedford at gmail.com>
>> wrote:
>>
>> The biggest issue with ezstream is that for some reason you end up with
>> multiple instances of the program running. I'm not sure what the best
>> fix would be; the cron script is a big bandage held on with duct tape. I
>> really wish I knew how to write code.
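>>
>> In theory, wrapping whatever launches ezstream in flock(1) would stop a
>> second copy from ever starting; a rough, untested sketch (the config
>> path here is only a placeholder):
>>
>> #!/bin/bash
>> # flock -n exits at once if another instance already holds the lock.
>> exec flock -n /var/lock/ezstream.lock \
>>     ezstream -c /etc/ezstream.xml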
>>
>> Loren Tedford (KC9ZHV)
>> Email: lorentedford at gmail.com
>>
>> Phone: 618-553-0806
>> Fax: 16185512755
>> http://www.lorentedford.com
>> http://kc9zhv.com
>>
>> Sent from Droid Turbo from Verizon wireless network
>>
>> On May 2, 2016 10:00 AM, "Brent Weatherall" <va3bfw at gmail.com> wrote:
>>
>> Thanks for the cron script to try, Loren. Hopefully the periodic restart
>> of ezstream will cure any issues.
>>
>>
>>
>> On Mon, Apr 25, 2016 at 3:57 PM Loren Tedford <lorentedford at gmail.com>
>> wrote:
>>
>> Here is what I did to get ezstream working. I will be updating the forum
>> post below with my fix for ezstream locking up randomly. Basically it
>> goes like this:
>>
>>
>>
>> http://forum.kc9zhv.com/index.php/topic,23.0.html
>>
>>
>>
>>
>>
>> crontab -e
>>
>> * */12 * * * sh /etc/asterisk/stopez.sh
>>
>>
>>
>>
>>
>> The script that I use to reset ezstream:
>>
>> root at server:/etc/asterisk# cat stopez.sh
>> #!/bin/bash
>> kill -9 $(pgrep ezstream) &
>> sleep 2s
>> kill -9 $(pgrep ezstream)    # second pass to catch any stragglers
>> sleep 2s
>> /usr/sbin/asterisk -rx "module reload"
>>
>>
>>
>>
>>
>> To restart Asterisk, I run this customized script:
>>
>> root at server:/etc/asterisk# cat restart.sh
>> #!/bin/bash
>> kill -9 $(pgrep ezstream) &
>> kill -9 $(pgrep ezstream)    # second pass in case the first missed a pid
>> service asterisk stop
>> sleep 1s
>> killall asterisk             # force-stop anything the init script left behind
>> kill -9 $(pgrep ezstream) &
>> service asterisk stop
>> sleep 1s
>> service asterisk start
>>
>>
>>
>>
>>
>> So far I have had no issues with ezstream since I did this.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> Loren Tedford (KC9ZHV)
>> Email: lorentedford at gmail.com
>> http://www.lorentedford.com
>> http://www.kc9zhv.com
>> http://forum.kc9zhv.com
>> http://hub.kc9zhv.com
>> http://www.newwavesucks.com
>> http://forum.newwavesucks.com
>>
>>
>>
>> On Mon, Apr 25, 2016 at 12:10 PM, Brent Weatherall <va3bfw at gmail.com>
>> wrote:
>>
>> Hello,
>> I've recently set up an AllStar hub node and decided to stream it via
>> Broadcastify.
>>
>> Setup has gone well. I'm running Debian; the AllStar node is running
>> without issue, and the stream is now being delivered via Broadcastify.
>>
>>
>>
>> I've run into an issue where the ezstream process will occasionally pin
>> at 100%. I can kill it fine, and Asterisk gracefully resumes by creating
>> a new working output stream.
>>
>> Has anyone else encountered this pinned CPU usage? Will I have to set up
>> something to monitor the process and kill it so that it restarts?
>>
>>
>>
>> Thanks in advance for any suggestions.
>>
>> VA3BFW - Brent
>>
>>
>>