Hi, I just found out that I cannot create sub-directories under a directory that I created. I tried both through Windows access (Samba) \\dns-323\home and through telnet.
After I telnet and cd to that directory and try to create a sub-directory, I get the following message:
# mkdir abc
mkdir: Cannot create directory 'abc': Input/output error
Permission is set to "drwxrwxrwx" on all. Interestingly, I cannot create a file in that directory:
# cat > abc.txt
What's going on?
Thanks,
kilo
Offline
A clarification: I meant I can create a file but not a directory.
So what could be going on? I am going nuts.
Recently, I posted another issue about not being able to delete an old install of lnx_bin (part of the fun_plug install). Somehow, busybox3 has become a recursive directory that is just too deep to be deleted. I renamed lnx_bin to lnx_bin_old and installed a new lnx_bin. The new install is fine, but I cannot delete the old lnx_bin_old install. $%&^&&
I would appreciate any help!
Thanks.
Offline
I think I might have to run e2fsck on my disk. Can some kind soul help me with this? I tried the following:
# e2fsck /dev/md0
I got the following message:
e2fsck 1.32 (09-Nov-2002)
The filesystem size (according to the superblock) is 121963440 blocks
The physical size of the device is 121636128 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes
What does it mean? What should I do?
Thanks.
Offline
I had a similar problem - not with creating directories, but with deleting directories and files. It turned out these were filesystem errors, and running e2fsck fixed them. However, it can be dangerous to run e2fsck on mounted disks - and there's the problem: you can't unmount the disks if your telnet server runs off that disk... I didn't even try moving the telnet server and e2fsprogs to a ramdisk; instead I created an initramfs, booted it, ran e2fsck, done.
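For what it's worth, once nothing runs from the disk anymore, the check itself is short. Roughly (device and mount point names here are just examples - /dev/md0 for a RAID1 volume, /dev/sda2 for a single disk - so adjust them to your setup):
# umount /mnt/HD_a2
# e2fsck -f /dev/md0
# mount /dev/md0 /mnt/HD_a2
umount will refuse with "device is busy" as long as anything (like the telnet daemon) still runs from that disk, which is why I went the initramfs route in the first place.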
Offline
Thanks. I have the "delete" problem as well.
I don't know Linux well, so I couldn't follow your recipe. Could you please explain again how you ran e2fsck, with your command-line options? I am using two 500GB disks in RAID1 mode.
I appreciate your help. Thanks.
Offline
I can upload the needed files tonight, but I can't help you with the RAID setup - I never did that myself, and I'm running "two separate disks" mode. I think you should definitely check the disks with RAID enabled, or you risk inconsistencies. Maybe someone else with RAID experience can help out here.
Offline
Hi All,
After running fsck on my disks, everything is fine. Since I don't really need the level of robustness offered by RAID1, I don't use it. Instead, I occasionally back up some directories across the disks using rsync.
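For reference, the sort of command I run looks like this (the paths are just examples of how the two disks show up on my box; yours may differ):
# rsync -av --delete /mnt/HD_a2/photos/ /mnt/HD_b2/backup/photos/
-a preserves permissions and timestamps, -v lists what gets copied, and --delete removes files from the backup copy that no longer exist in the source.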
I would like to acknowledge fonz for his help (fsck and fun_plug with rsync). Thank you very much.
Offline
@fonz,
I think e2fsck is already installed on the DNS-323. There's no need to use an "external" version.
# find / -name e2fsck
/sys/crfs/bin/e2fsck
/usr/bin/e2fsck
best regards...keule
Offline
I am having the same problem while deleting the old mldonkey directory. Checking disks with e2fsck right now. Hope it will fix the problem.
Offline
Fonz, thank you for the files and instructions. On the note about the RAID set, you need to have mdadm installed in order to create/recognize the existing RAID configuration.
The following should work:
mdadm --assemble /dev/md0 /dev/sda2 /dev/sdb2
This should provide you with the device md0, which you can then run your e2fsck operation on. To get details on the md0 device, one can issue
mdadm --detail /dev/md0
It would be nice to have the latest (stable) mdadm included in fonz's plug-ins, as it would allow "grow" (resize) along with having resize2fs. This would let one migrate from their current drives to larger drives with little issue, although mileage may vary. I tried this going from 400GB to 1TB, but because my 400 had filesystem issues, the migration was less than desirable. I was able to work around the issues by just running rsync from the source to the target; I had to pull the 400 into an external server (thanks to an Ubuntu LiveCD!), mount the 400 there, and start clean on the DNS-323 with a new 1TB RAID set. I do suspect that if my original source filesystem had been clean, the initial migration would have gone more smoothly.
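For anyone who wants to try the grow route, the sequence would look roughly like this once both larger drives are in the array (a sketch only - I haven't run exactly this on the DNS-323, and it needs a recent mdadm plus resize2fs):
# mdadm --grow /dev/md0 --size=max
# e2fsck -f /dev/md0
# resize2fs /dev/md0
The first command extends the RAID1 volume to fill the larger partitions, and resize2fs then grows the filesystem to match; the forced fsck in between is needed because resize2fs refuses to resize an unmounted filesystem that hasn't just been checked.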
Just fyi for others, I'm now running on 2x ST31000340NS drives in RAID1 config.
Tip: when playing with mdadm, be sure to stop the md0 device after unmounting your filesystem and before shutting down.
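In other words, something along these lines before powering off (the mount point is again just an example):
# umount /mnt/HD_a2
# mdadm --stop /dev/md0
Otherwise the array can come up dirty and resync on the next boot.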
Offline