You are probably using a Debian chroot environment, since the uptime binary is not included in the stock firmware.
The uptime data is kept in the file /proc/uptime. The first number is (on Unix systems) the number of seconds since the system was rebooted, and the second number is the number of seconds the processor has spent idle.
For my DNS-323 this gives:
#cat /proc/uptime
1169682368.36 54908.72
The first number seems to be the current system time in seconds since the epoch (Jan. 01, 1970). The second number could be the number of seconds since reboot, or it could be idle processor time.
I wrote a shell script to read these values:
#!/bin/sh

# formatted output function
print_time() {
    t_sec=${1%%.*}    # strip the fractional part; expr only handles integers
    # constants
    min=60
    hr=`expr $min \* 60`
    day=`expr $hr \* 24`
    # compute days, hrs, min
    t_days=`expr $t_sec / $day`
    t_hrs=`expr \( $t_sec \% $day \) / $hr`
    t_min=`expr \( $t_sec \% $hr \) / $min`
    # output
    echo -n ": $t_days days, "
    echo -n "$t_hrs hrs, "
    echo "$t_min min"
}
# end of formatted output function

UPTIME=/proc/uptime
if [ ! -f $UPTIME ]
then
    exit 1
fi

# read uptime
read up_sec idle_sec < $UPTIME

# output uptime
if [ -n "$up_sec" ]
then
    echo -n uptime
    print_time $up_sec
fi

# output idle time
if [ -n "$idle_sec" ]
then
    echo -n idle_time
    print_time $idle_sec
fi
For the preceding values of /proc/uptime the script output is:
uptime: 13537 days, 23 hrs, 46 min
idle_time: 0 days, 15 hrs, 15 min
I rebooted the system at about Wed Jan 24 08:30:29 and ran the script at Wed Jan 24 23:46:07. The DNS-323 was idle all day while I was at work, so roughly 15 hrs passed between boot and running the script, which means I can't tell whether the second number is idle time or uptime.
The first number (without the decimal digits) gives Wed, 24 Jan 2007 23:46:08 when entered at http://www.onlineconversion.com/unix_time.htm, which is about the time I executed the cat /proc/uptime command.
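If GNU date is available (the stock busybox date may not accept the @ syntax), the same conversion can be checked on the box itself:
# convert the integer part of the first /proc/uptime field to a calendar date
date -u -d @1169682368
# Wed Jan 24 23:46:08 UTC 2007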
I think the kernel is writing the wrong values into /proc/uptime.
//Mig
Offline
Hello,
First I would like to thank all the active members of this forum. I have used it as my DNS-323 resource for quite some time, but never had the time to sit down and really work with the unit until now, on my vacation - hence my late registration...
Now my first question: did anyone figure out what the second number in /proc/uptime is, idle time or time since the last boot? I would like a proper way of displaying my box's uptime... is the recommended way to use the above script, or some other means?
over n out
Offline
I have compiled uprecords:
http://upit.jtw.dk/dl/8r9/
You need to change the line
SBINDIR=${FUNPLUGDIR}/bin
to
SBINDIR=${FUNPLUGDIR}/sbin
in fonz's latest fun_plug.
Offline
sweet, thanks!
For everyone else trying this, two things should be mentioned as well:
1) One more modification must be made to fonz's fun_plug 0.3: add the line
export SBINDIR
as well as the modification mentioned above,
SBINDIR=${FUNPLUGDIR}/sbin
(both changes are shown together at the end of this post).
2) To run uptimed, type uprecords after a reboot:
/mnt/HD_a2 # uprecords
# Uptime | System Boot up
----------------------------+---------------------------------------------------
-> 1 0 days, 00:05:42 | Linux 2.6.12.6-arm1 Tue Jul 24 10:05:38 2007
----------------------------+---------------------------------------------------
seems to work like a charm
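Putting both changes together, the relevant lines of fonz's fun_plug 0.3 end up roughly like this (the comments are my reading of why the export is needed, so double-check against your own copy):
# in fonz's fun_plug 0.3 - only these two lines are touched for uptimed
SBINDIR=${FUNPLUGDIR}/sbin   # changed from ${FUNPLUGDIR}/bin
export SBINDIR               # added so that scripts started from fun_plug see the variable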
Offline
KRH: You wouldn't by any chance have uptimed compiled for ffp 0.5?
thanks
Offline
Looks like this was written for ffp, and an older version at that. I am using a chroot Debian install and I'd like to have the correct uptime. Can someone point me in the right direction to get this working?
Offline
same question as kennedy101
Offline
Try this. It's a shell script fun_plug drop-in - no compiling necessary.
It sets an alias for uptime, so if it doesn't work the first time, either read the README (gasp) or log off the NAS and back on again to source your profile. It also assumes that you have fonz's bash shell installed.
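For illustration only (the path to the script is an assumption, check the package's README), the drop-in effectively boils down to something like:
# hypothetical sketch of what the drop-in sets up - the path is assumed
alias uptime='/ffp/bin/uptime.sh'
The alias only takes effect in shells that have sourced your profile, which is why a fresh login (or sourcing the profile by hand) is needed.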
Offline
Works GREAT - Thanks!! Just needed to modify /ffp/bin/bash to /bin/bash and all was good.
Offline
With uptime-0.1-1.tgz, I got some strange readings under chroot Debian Lenny:
11:19:38 up 0 days, 14:08, 2 users, load average: 2.05, 2.76, 2.83
11:27:23 up 0 days, 14:09, 2 users, load average: 1.99, 2.18, 2.49
11:31:20 up 0 days, 14:09, 2 users, load average: 3.15, 2.42, 2.49
The result of cat /proc/uptime around 11:26 is:
1232969208.78 50947.06
I am pretty sure the 1st number is the current time, but I have no idea how the 2nd number is being updated/used.
Last edited by toolbox (2009-01-26 22:29:42)
Offline
toolbox wrote:
With uptime-0.1-1.tgz, I got some strange readings under chroot Debian Lenny:
11:19:38 up 0 days, 14:08, 2 users, load average: 2.05, 2.76, 2.83
11:27:23 up 0 days, 14:09, 2 users, load average: 1.99, 2.18, 2.49
11:31:20 up 0 days, 14:09, 2 users, load average: 3.15, 2.42, 2.49
The result of cat /proc/uptime around 11:26 is:
1232969208.78 50947.06
I am pretty sure the 1st number is the current time, but I have no idea how the 2nd number is being updated/used.
The problem "here" on the DNS-323 is that the values in /proc/uptime contain the right values but in the "wrong" order. The shell script takes a guess as to if the order is reversed or not and formats the values accordingly. The values (should) represent uptime and idle time.
You say you got strange output, can you say what it is that you were expecting?
Offline
Toolbox, if I understood your post, the output from uptime.sh was wrong.
You got this:
13:25:21 up 0 days, 14:11, 2 users, load average: 2.49, 2.65, 2.42
What's wrong in the above? What should it have been?
Offline
I rebooted my DNS-323... From what I observed, the two numbers in /proc/uptime are the current time and the idle time, respectively.
Right after I rebooted the unit, the uptime was incremented every minute. However, 1 hr after the reboot, the "up time" from uptime.sh was only 0 days, 00:47.
After 4 and a half hours, the uptime was 4:06. During these 4.5 hrs rtorrent was running some heavy hash checks, which is why the reported value lagged behind the wall clock - it fits idle time rather than real uptime.
Last edited by toolbox (2009-01-27 06:53:03)
Offline
Here's a detailed thread I just started on the kernel bug that causes these issues: http://dns323.kood.org/forum/viewtopic.php?id=4006.
There is no real fix without a custom kernel, but I think you could adapt the scripts here to get the initial boot time by subtracting the values from the cpu0 line in /proc/stat from the initially sampled uptime. It wouldn't make much difference, but for purists it should help compensate for the time missed before fun_plug kicks in.
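A rough sketch of that idea (it assumes a standard 2.6 /proc/stat layout and the usual HZ of 100 on these boxes, both worth double-checking):
#!/bin/sh
# sum the per-state jiffies on the cpu0 line of /proc/stat; divided by HZ this
# approximates the seconds already elapsed since the kernel booted, which can
# then be compared against (or subtracted from) the first sampled uptime value
jiffies=`awk '/^cpu0 / { s = 0; for (i = 2; i <= NF; i++) s += $i; print s }' /proc/stat`
echo "approx. seconds since boot according to /proc/stat: `expr $jiffies / 100`"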
-Jeff
Offline
I installed uptime-0.1-1.tgz... then rebooted the DNS... waited about 30mins then ran uptime and got the following output....
/ffp/bin/uptime.sh: line 70: users: command not found
11:21:47 up 0 days, 00:00, 0 users, load average: 5.46, 5.19, 4.05
Any ideas??
Offline
capone wrote:
I installed uptime-0.1-1.tgz... then rebooted the DNS... waited about 30mins then ran uptime and got the following output....
/ffp/bin/uptime.sh: line 70: users: command not found
11:21:47 up 0 days, 00:00, 0 users, load average: 5.46, 5.19, 4.05
Any ideas??
cd /mnt/HD_a2/funpkg/
wget http://www.inreto.de/dns323/fun-plug/0.5/packages/coreutils-6.12-1.tgz
funpkg -i ./coreutils-6.12-1.tgz
Offline
capone wrote:
11:21:47 up 0 days, 00:00, 0 users, load average: 5.46, 5.19, 4.05
That's pretty heavy load info.
FWIW - I installed uptime on 2 NAS boxes. Even though both were rebooted at the same time, and the 'info' page shows the correct uptime, uptime.sh differed by 5 days. There is something screwy in there somewhere.
Rather than replace 'uptime' and recalculate the values, my script queries the admin info page and pulls the values from there. I run it hourly and pipe it to a log so I can monitor temperature and uptime over a prolonged period:
Sat, 19 Mar 2011 21:00:09 EDT NAS40 Size:0915GB Free:0493GB [53%] Temp:33°C Up:33 days 6 hours 15 minutes
Sat, 19 Mar 2011 21:00:09 EDT NAS41 Size:2748GB Free:1187GB [43%] Temp:31°C Up:33 days 6 hours 15 minutes
Sat, 19 Mar 2011 22:00:08 EDT NAS40 Size:0915GB Free:0493GB [53%] Temp:33°C Up:33 days 7 hours 15 minutes
Sat, 19 Mar 2011 22:00:08 EDT NAS41 Size:2748GB Free:1187GB [43%] Temp:31°C Up:33 days 7 hours 15 minutes
Sat, 19 Mar 2011 23:00:07 EDT NAS40 Size:0915GB Free:0493GB [53%] Temp:33°C Up:33 days 8 hours 15 minutes
Sat, 19 Mar 2011 23:00:07 EDT NAS41 Size:2748GB Free:1187GB [43%] Temp:33°C Up:33 days 8 hours 15 minutes
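A stripped-down sketch of that kind of hourly poller (the NAS address, the status page path, and the text extraction below are placeholders rather than the real DNS-323 firmware details, so adapt them to whatever your admin page actually serves):
#!/bin/sh
# hypothetical poller - address, page path and grep pattern are placeholders only
NAS=192.168.0.40                      # address of the NAS (assumed)
PAGE="http://$NAS/info.html"          # status page path (assumed; varies by firmware)
LOG=/mnt/HD_a2/nas_status.log

STAMP=`date`
# fetch the page, strip HTML tags, keep the lines that mention temperature or uptime
INFO=`wget -q -O - "$PAGE" | sed 's/<[^>]*>/ /g' | grep -iE 'temp|up'`
echo "$STAMP $INFO" >> $LOG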
Last edited by FunFiler (2011-03-20 05:19:18)
Offline
Hi!
This is how I solved it on my DNS-313 running Debian Lenny:
(Linux DNS-313 2.6.15 #79 Thu Jul 30 10:38:31 EDT 2009 armv4l GNU/Linux)
I created the script below, put it in my home directory (/home/testuser) and made it executable:
#!/bin/bash
#
# uptime for DNS-313 Debian 5 (Lenny) -
# 05122013 - created by Nagy Péter
#
# values
# read out the uptime values from `cat /proc/uptime`, then swap them...
days_hours=`cat /proc/uptime | awk '{print $2}' | awk '{printf("%d days, %02d:%02d:%02d", int($0/86400), int(($0%86400)/3600), int(($0%3600)/60), int($0%60))}' | strings`
w_0=`w`
w_1=`echo $w_0 | head -1 | awk -F" " '{print $1,$2}'`
w_2=`echo $w_0 | head -1 | awk -F" " '{print $6,$7,$8,$9,$10,$11,$12}'`
echo "$w_1 $days_hours, $w_2"
Then I renamed the old uptime binary in /usr/bin so that it won't be used by accident, and created a symlink to my script as the new "uptime":
mv /usr/bin/uptime /usr/bin/uptimeold
ln -s /home/testuser/uptime.sh /usr/bin/uptime
You can see below that it works! uptimeold is the old deprecated uptime, while uptime is the new one...
root@DNS-313:~/testuser# uptimeold
11:25:36 up 16044 days, 10:25, 1 user, load average: 0.06, 0.15, 0.17
root@DNS-313:~/testuser# uptime
11:25:42 up 0 days, 02:05:14, 1 user, load average: 0.14, 0.17, 0.17
Even Webmin shows normal uptimes after implementing this script ...
Best Regards,
aFoP
Last edited by aFoP (2013-12-05 13:01:56)
Offline