In the attached picture you’ll be able to see what’s happening to one of my most important servers, which contains some irreplaceable climate data. I’m at a loss and don’t understand what’s going on, because Linux is not my specialty and my Linux expert has since moved on to other ventures. It’s very important I get this server working again, so I’m asking the WUWT community for help.
The server had been offline for a few weeks, and was properly shut down. Upon powering it back up, I got a “no bootable disk found” message. I determined the RAID (Hardware RAID1 – 2 mirrored drives) had been degraded, and it seemed one disk had failed. So I purchased two new identical HDs, cloned the good one, and rebuilt the RAID1. The RAID is administered by an on-board Adaptec RAID controller, and it reports the RAID as healthy.
What happens now is that it attempts to boot, but gets stuck in a loop on the last messages “Init ID c1, c2…etc” and repeats those error messages. I get the same partial boot and error sequence if I take out the RAID in BIOS, and try booting a single drive in straight SATA mode.
This machine was built circa 2007, and has Slackware Linux of that era installed, I don’t see a version number coming up on boot, so can’t provide it.
Any and all help appreciated. – Anthony

Okay. Here it is the next day. How’s it going?? Did Anthony accept Linux expert E.M. Smith’s excellent, generous, offer to drive up (only ~3 hours)? As far as we readers know, Anthony just blew off Mr. Smith!
Well, just hope it is all okay, now.
I’m on standby. A local guy was going to take a look so no need for me to drive just yet. System was set aside for a few days while Anthony did other things anyway. I’m supposed to “check in” today, so reading here first to ‘catch up’ then will hit email for any update there.
I’ve got my kit ready to load into the car and the spouse expects me to be out of town tomorrow if need be.
So no, Anthony did not “blow me off”, but contacted me via private channel.
De-lurk. Your system, which looks like CentOS at a glance, is trying to start X (graphics mode) after runlevel 3. GUI should be RL 5. Something’s either misconfigured or missing. One fix is to set level 3 as init-default in /etc/inittab.
id:3:initdefault: ← change to this
#id:5:initdefault: ← from this (the old line, commented out)
This will disable boot-to-GUI.
If you want a GUI, install xorg, gdm and gnome (via “yum install”).
Thanks, that would be great if I could understand any of it. One of the downsides of Linux is that experts in it often speak in tongues…at least that’s how it seems to mere mortals. 😉
Hopefully you are taking EM Smith’s advice. Yes, Linux can be arcane, but modern distros like Cinnamon Mint are sufficiently Windows-like for mere mortals.
Definitely build a new boxen, though if you are on a budget, manufacturer refurbs are the way to go. I recommend Dell.
Anthony, it seems you have plenty of competent people willing and able to help, but as I suggested above, I think you need to define exactly where you are with this and what the perceived problems and constraints are.
That will ensure that you get specific help, not the whole kitchen sink of everything anyone knows about the Linux command line.
The first step, suggested by many, is to get a bootable “live” DVD, start the PC off that, and see what state your new copy of the file system on the RAID is in. Just about any Linux distro’s live DVD will do for that.
I suggested a number of things you may wish to clarify to enable people to give specific, pertinent advice; I won’t repeat them here. Depending upon what is required, you may not need to get involved in arcane and error-prone command line hacking.
This seems to be Slackware 10 or 11, judging from the two package versions we are able to see in the screenshot. That means it is running a 2.4.x kernel. That may be relevant if you need to reinstall something similar to get the backup software running. There are major differences from 2.4 to 2.6 (which is way out of date itself now), so anything you intended to run on that machine is very unlikely to run on anything remotely recent.
see PACKAGES.TXT for the various version here:
http://ftp5.gwdg.de/pub/linux/slackware/
Again, define what you need to do and why you need to do it, and someone will come up with a solution.
Maybe start with:
is the only copy of the “backup” data on the RAID devices or do you have a physical backup elsewhere?
what is the backup software that was used?
“anything you have intended to run on that machine is very unlike to run on anything remotely recent”
Not a showstopper. Install recent Linux distro on new machine. Install relevant Slackware distro in a virtual machine. Proceed from there. Similar to what I did to get Corel Word Perfect for Linux running recently on Cinnamon Mint 17.3. ISOs for old versions of Slackware are readily available for DL.
Well, if you knew nothing about mechanics and put out a call for an expert to advise on how to get your engine to start, you would probably say the same thing about the advice you got.
That is the way it appears, and is why the fs integrity needs to be checked as a first step. If it can’t spawn a tty you are not going to get runlevel 3 or an emergency console either!! So far that root fs looks stuffed, though it may be repairable, or at least partly recoverable for the data.
I am guessing that the “backup” of the data is also on that drive and may or may not be recoverable.
“Michael Palmer March 22, 2017 at 4:16 pm
If you have a copy of the program, I would suggest trying to get it to run on a recent Linux installation. If it runs, then you can install a newer Linux version on your server, rather than fixing the old one.”
Actually… Get a new server. Install Linux, or boot from a temporary Linux disk. See if the old software runs on it. See if you can access the old disk drives (actually, the copies). If so, then install Linux on the new box and try using that.
Do not throw away the old server until you know that everything is working. Someone else suggested something similar, but their sequence was to toss the old server first.
By the way… on the old server, consider replacing the keyboard. Maybe someone spilled something on it, and the keyboard might be flooding the console process and making it die.
As others here have observed, old hardware dies. Electrolytic capacitors dry out, magnetic domains on disks fade… One machine I had in my early days had what looked like a failure of the video adapter, but it wasn’t. The serial card was faulty. So it goes…
I am going on the assumption that the disk is OK and something changed. So I would check the BIOS and make sure the serial port is enabled. I no longer have any bare metal Linux servers, so I can’t test it. But that is the first thing I would do.
This is very old software. The LVM version included was released in November 2003.
Good Luck
Hi,
It looks to be having problems with serial ports that aren’t live. Open /etc/inittab and look for the corresponding lines that match the message. Should be something like:
c3:12345:respawn:/sbin/agetty 38400 tty3 linux
From your display, you should have 6 of them.
You need to place a # before each of those lines, save the file, then reboot.
This should solve the issue.
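For anyone following along at home, the edit above can be sketched like this. It’s a demo on a scratch copy rather than the live file, and the sample inittab contents are an assumption modeled on a typical Slackware-era default, not taken from Anthony’s actual system:

```shell
# Work on a scratch copy first; on the real box you would copy /etc/inittab
# here, edit, inspect, then copy it back. The sample lines below are an
# assumption based on a typical Slackware-era inittab.
cat > /tmp/inittab.sample <<'EOF'
id:3:initdefault:
c1:12345:respawn:/sbin/agetty 38400 tty1 linux
c2:12345:respawn:/sbin/agetty 38400 tty2 linux
c3:12345:respawn:/sbin/agetty 38400 tty3 linux
c4:12345:respawn:/sbin/agetty 38400 tty4 linux
c5:12345:respawn:/sbin/agetty 38400 tty5 linux
c6:12345:respawn:/sbin/agetty 38400 tty6 linux
EOF
# Prefix '#' to each of the six console getty lines so init stops
# trying to respawn them.
sed -i 's/^c[1-6]:/#&/' /tmp/inittab.sample
grep -c '^#c' /tmp/inittab.sample   # should report 6
```

After inspecting the result, the edited file would go back in place of /etc/inittab, followed by a reboot.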
Best of luck.
Anthony: If I understand your problem correctly: you have a backup of your data, so we may leave that for the moment. You have an old PC, at least about 10 years old, with sufficient resources to run Slackware as a server OS. A new version of that distro may be downloaded from their homepage, according to distrowatch.com, if need be.
Most Linux distros can set up software RAID out of the box. It may be comparable in speed with your hw controller. Mdadm may be installed from the package administration. Possibly you may find a Linux driver for your hw controller too, if you keep it but install another distro as your server.
I have never used Slackware, didn’t find it interesting, though I have used various flavours of Linux since I started with SUSE (now OpenSUSE) when it was delivered on some 3.5″ floppies. If I were you, I would have downloaded Ubuntu (Debian based) with the preferred desktop and used it as a basis for your server. Burn the downloaded ISO to a DVD or a flash drive, and boot up the PC from DVD or a USB port. Then you can check out whether all your hardware works from the live DVD before you install it as a desktop or server on your PC. After that you may set up your selected RAID 1 for mirroring the disks.
You may keep the desktop running after finishing the install, but it should be easy to shut it down and start the OS in server mode. I installed Ubuntu for a friend as a server with wordpress. He preferred to have the desktop running to be able to handle it, but when he has finished installing a large photo gallery on wordpress, he may go for a clean server.
To your raid solution: Throw it into the dustbin and install an SSD if the size and price are in your comfort zone. These days it may last as long as an HD solution, with a tremendous speed increase. No moving parts. Less power used.
That was my thoughts about your problem. This is written on a ASUS ZenBook UX 305 with an ssd disk, running Linux Mint 18, with use of LibreOffice Writer from where I copied it to your wordpress message window.
Good luck with your server.
Greetings from The Land of The Midnight Sun (North Norway)
For SSD fans:
Do note that despite “no moving parts” there is “cell wear” from write operations. The SSD in the Mac from which I am posting this died after about 5 years of use (data recoverable both from cloud and local TB disk). I’m presently posting using an external SD card for file system and OS (as we bought a new Mac for the spouse and I inherited this “broken” system). On my “someday” list is to install a new SSD (much faster than the SD card, that is Bog Slow…)
So expect your SSD to be good for “a few years” but also expect that heavy read / write can cause it to fail catastrophically and much sooner than a rarely used hard disk…
Thanks for this. I’ve got SSD on a 5-year old Mac (backed up six ways from Sunday) but I was unaware of the relative weakness of SSD vs HD. Good to know!! (Just when I’ve got the proper grooves worn in the keys, the damn thing is gonna fail on me, I know it…)
I still don’t understand the exact problem because Anthony has been a bit vague. It looks like getty can’t start because it can’t connect to the virtual consoles. In other words, no login prompt. Commenting out lines in inittab will still mean no login prompt.
But is that desired anyway? What hardware is this? Does the program you need run off the network or do you login with a gui?
Tell us what it used to do and we can probably help get it back to that state.
Although as already said, things that are ten years old and broken are very hard to fix. It would probably be easier to reinstall from scratch.
Before worrying about why tty’s aren’t there, the first task is to see whether the root fs is corrupted or not.
The “exact problem” can’t be known until it is fixed. Until then, it is just speculation based on diagnostics.
From what is known, there are several potential causes, from hardware to bios to configuration and on up to RAID failures on the /boot /root areas.
I’d approach it by bypassing as many of those as possible with new gear / OS and then recover the data; after that, work backward to find the “exact problem”… if terminally curious…
Time for ZFS, Mr. Watts. 🙂
So, Anthony, were you able to get the problem fixed? As I said near the top of the post, I suffered through something similar last year. I had an older IBM xSeries server running as a simple Linux file share box. I didn’t know there was anything critical on it until someone in marketing said they couldn’t get to their data. It was old data, and I told the users that the box wasn’t backed up and was slated to be retired. I was lucky in that I could boot from USB and pull the data off. The punctured RAID array came about because I didn’t update the firmware for over 5 years.
Lesson: keep the patches and firmware microcode up to date. The value of WUWT lies not in the hardware, software, etc., but in the content. You may wish to reevaluate things and consider moving your critical data to the Cloud. If you still prefer Linux because of the low cost, there are plenty of Cloud providers that allow you to spin up Linux distros, and they charge per subscription. When you have data sitting on 10-year-old unpatched hardware you’re just asking for trouble.
Good luck.
No, not yet.
can you post a followup with the fixes, root issue, etc?
would be good to learn from it as i am interested.
Anthony, EM Smith offered upthread to drive over and help you fix it.
BTW we should take care not to share any details that could help a potential hacker.
For what it’s worth Anthony, given you have lots of advice already but I take it you haven’t resolved this yet, the first option is to get someone who knows what they’re doing in front of you and the computer – so take up EM Smith on his offer.
Failing that, I take it that this is mostly about recovering that machine to an operable state, because otherwise you have to rebuild that machine to use the software. And as we all know, that can be a lot of time/work.
So, I’d work through the likely causes:
Firstly, is it a corruption on the drive? You say you made a copy, but that may have been a copy of something that was corrupt in some way. So boot from a recovery system (CD or USB drive – Knoppix or RescueCD will work, but I see there’s a liveslak which is probably your best bet: https://docs.slackware.com/slackware:liveslak), and then see if it will mount the disk. If it does, you can rummage around to see whether it all looks right, and you can run a file system check (fsck) to see if Linux thinks it is correct. If it doesn’t, you’re into recovering that disk/filesystem, which is a more error-prone process.
IF the hard drive seems sound (you can mount it and the files all look to be there) you can do a CHROOT to try to run from it without running a full boot process. http://docs.slackware.com/howtos:slackware_admin:how_to_chroot_from_media
No doubt having someone like EM Smith there to do this for you would be much better. Depending on what you find, you can then decide whether you’re better to fix that old disk to make it bootable again (maybe a couple of key boot files are corrupt, replace them and you’re off), or just make a new install and bring the data and config across.
IF the data is not good, then recover from backup and reinstall the computer.
IF the data is good, but it can’t be made to boot, then install a new OS on a clean disk – Slackware if you wish – and then copy the data into it and reconfigure the software you need as appropriate. Or bite the bullet and put this onto another machine you already have – so you have one fewer machine to maintain.
As noted in a comment a bit above:
Anthony has taken me up on my offer, but was “otherwise occupied” until today. I’m now checking in to find out current status and prep.
Linux starts with a loader utility called GRUB which is prior to the screen-shot at the top. This loads a kernel image and minimal root file system compressed as single file with a name starting initrd…. , that much seems to work. It is that initrd which is showing LVM etc.
It then loads ( mount ) the real root fs in read only mode, that mount process works but does not mean it is not corrupted. The kernel will then attempt to switch from the initrd minimal fs in memory to the real fs on the disk. This is where it falls over. The root fs is corrupted.
Anthony, you have expressed above that you have little to no knowledge of Linux commands, so how was this “clone” done? Using windoze?
Is that the correct way to replace a failed RAID1 disk? I think you should have just added the new virgin disk and used the Adaptec firmware to rebuild the array.
If you have one byte for byte accurate clone of the working drive ( which you wisely are not using ) then put that in, disable RAID and it may boot. To restore RAID you should let the firmware handle it.
Hi,
Just to add my voice to the confusion. I held down a job supporting a one billion dollar embedded software project running linux for about 15 years. I did the kernel drivers and os system software.
I would say that the fundamental problem is new hardware with old software. You have upgraded the controllers but used the old software.
So I would suggest getting a late version distribution and trying to mount the disks with that.
good luck,
I thought quotes would work. The middle paragraph should be:
I would say that the fundamental problem is new hardware with old software. You have upgraded the controllers “So I purchased two new identical HD’s cloned the good one, and rebuilt the RAID1” but used the old software “This machine was built circa 2007, and has Slackware Linux of that era installed”.
Anthony. I’ve been thinking on this, and I have a theory that seems to cover all the facts you’ve posted. It’s not all that likely, but I’ll present it here anyway.
Let me preface it by saying that clearly a lot of stuff works. Your PC passes its BIOS quality checks. Otherwise, it would just sit there and beep at you. It can find a boot disk, read the first sector, find the boot partition specified in the first sector, locate and load several files — a kernel image (probably called bzImage), an initialization ramdisk (initrd), and an initialization control file (inittab). It’s failing when inittab processing tries to set up “virtual consoles” that allow the keyboard and monitor to pretend to be a terminal. You need one virtual console to log in and run stuff, but for whatever reason Linux traditionally sets up six of them.
It’s certainly possible that something needed to set up the virtual consoles is broken — in which case, you’ll probably need to reinstall. However, I’m thinking along different lines. What I think could have happened is as follows.
1. You have elderly hardware there. It comes from an era before programmable non-volatile RAM (NVRAM) was ubiquitous.
2. The PC boot process requires a little bit of configuration information. That information is stored in NVRAM. From 1985 for two decades or more, NVRAM was implemented as a small static RAM that is powered from a battery when the PC is off. The battery can be either non-rechargeable — typically a CR2032 Lithium coin cell — or rechargeable — typically a small barrel shaped object on the motherboard. Neither kind of battery lasts forever.
3. PC hardware manufacturers try to set things up so that users don’t have to change the configuration information. By 2003 or so, they had gotten pretty good at it. That’s a good thing because tinkering with BIOS setup parameters is not for the faint of heart. But nonetheless it is sometimes necessary. For example, the 2003 era Walmart $200 PC I’m typing this on ran fine with default settings when I got it. But when I upgraded the memory a few years later it would crash every now and then until I slowed down one of the memory timing parameters in BIOS setup.
4. So what I think may have happened is that the guy who set up your PC found that it didn’t run quite right until he tweaked some BIOS setting or other. Problem solved. … Until the “CMOS battery” died. And being a server running 24/7 it continued to run fine. Until the PC was powered off. It only failed when you tried to start it after a period off line.
If my theory is right, it’s entirely possible that nothing you try will quite work. What do you do then? Well you COULD try going into BIOS setup (You push some magic keys while booting — which keys vary with the BIOS. You may see a message telling you what key(s) flash by when the boot is just starting) then resetting everything that looks like it affects timing to the most conservative possible setting. (Needless to say, you should write down the original settings) then rebooting without powering off. If the PC then boots into Linux, replace the battery and either live with a slower machine or spend more time than you really want to tinkering with CMOS settings.
If you are more comfortable with Windows, there is probably a program somewhere that will allow you to read the ext2 file system on your disks from Windows using a $30 or less USB to IDE(?) adaptor. (I think both disks in a RAID1 array are identical and that RAID is only relevant if you are trying to use the disk in a RAID array , but that’s not something I know much about)
Good luck.
The CMOS battery is a credible scenario; one of the first symptoms is the real time clock losing count, so one could look at the BIOS settings and check the clock. However, if it loses important settings you will normally get a CMOS checksum error on boot and will need human interaction before it will go any further. This was not reported, so I would guess this did not happen.
Maybe it is possible that this affected the RAID config, giving the impression of a defective disk.
One thing I am curious about is the fact that the Linux kernel seems to be seeing two identical disks at start up. If this was running configured with a hardware RAID controller, I would expect the kernel to see a single disc device, not two. Maybe not all firmware works that way. Since we don’t have anything more than “Adaptec”, we cannot check it out.
Secondly, from the screen-shot, it seems that linux is trying to provide software RAID in the kernel. Since the 3rd party who configured all this is apparently not available for comment, we could guess that the kernel was configured with RAID support but that was not used if it was indeed done using the firmware RAID. Alternatively, the firmware RAID was not used and it relied on software in the kernel.
Apparently the kernel image file and initrd are found and read correctly by GRUB; I’m guessing a likely default being the first partition of the first scsi disc. It should be noted that this is NOT Linux reading the fs at this point but GRUB, the boot loader.
The kernel is then mounting an ext2 fs without error, which means there is one on that partition. The GRUB menu needs to be examined to see which partition it is taking as the ‘root’. It will then try to chroot to that fs, replacing the minimalistic one it has in memory. This does not mean that this is the original fs intended, if the config has changed causing device names to move and change.
INIT will then start trying to use the root fs. It looks like that partition is corrupted or may not be the intended root fs at all. The content of that fs needs to be examined and checked, which takes us back to the earliest comments that were offered about booting from a rescue disk of some kind.
This is all good fun but a bit of a guessing game unless Anthony wishes to post some more information from the system.
Greg
Mostly I agree. I don’t think you have to have a boot manager to boot Linux. Even if one is needed, a circa 2007 Slackware would give you the option of installing LILO in its install menus, with GRUB — if it’s available at all — relegated to some sort of optional-extras list. Since there doesn’t seem to be a Windows partition, I’d bet on LILO or no boot manager at all.
In any case, I agree that if one can ever get to a bash prompt via normal boot or single user mode or a stand alone cdrom or usb linux system or whatever, the first thing I’d do is run fsck.ext2 (fsck is the Unix equivalent of the MSDOS chkdsk Anthony) on the disk partitions.
No, you do need a bootloader image installed to bootstrap the system, even if it only has one OS to boot, that’s the way PCs work. You are correct Slackware uses LILO, not GRUB.
The fsck check should be done initially with the partition mounted read-only to ensure nothing gets “corrected”. Others have said that but just recapping in case it gets missed.
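If you want to see what a no-changes check looks like before pointing it at a real partition, here’s a harmless sketch on a file-backed ext2 image. It assumes e2fsprogs (mke2fs, fsck.ext2) is installed; no root is needed since the target is a plain file, and the filename is made up for the demo:

```shell
# Create an 8 MB file and format it as ext2; -F lets mke2fs accept a
# regular file instead of a block device.
dd if=/dev/zero of=/tmp/demo-ext2.img bs=1M count=8 2>/dev/null
mke2fs -F -q /tmp/demo-ext2.img
# -n opens the filesystem read-only and answers "no" to every repair
# prompt, so it reports problems without "correcting" anything.
fsck.ext2 -n /tmp/demo-ext2.img
```

On the real disk, the same `fsck.ext2 -n` invocation would be run from the rescue system against the unmounted partition (e.g. `/dev/sda1` — a guess; check which partition actually holds the root fs first).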
You’re right that you need to continue the boot process beyond executing the code in the Master Boot Record. I’d assumed that Linux distributions, like the DOSes and Windows 9x, defaulted to a simple next-stage bootstrapper like syslinux in the boot partition Volume Boot Record. But apparently not.
I always tried to preserve the Windows OS that came with the machine so I always installed GRUB or LILO or booted unix from Windows (loadlin) back before Microsoft decided that mere users (even admin) were not to be trusted with that awesome power. It looks like Anthony’s machine probably has nothing to dual boot to so the bootloader will likely be whatever the guy who set it up favored. Maybe LILO. Maybe something he carried around on a CD or floppy.
I also agree that running fsck read-only initially is the proper way to do things. However, that requires tinkering with mount or (worse) fstab. Running on a copy of the disk I personally would probably decide that I was going to try booting from the “repaired” disk no matter how gruesome the fsck output. What’s to lose? (Hmmm. What happens when you run fsck against a RAID array??? That question might be worth pondering for at least a few seconds)
And here we are, three days later… It would be nice to hear what happened… Unless there is some report, it leaves a not-so-nice impression that, apparently, all that good advice, including E. M. Smith’s generous offer to drive up, was just *piff* ignored. Bummer.
Hope you are okay, Anthony. Hope the tech issue is resolved.
I agree, because I fix very old computers for poor people for free…(hardware and software), but I have never worked on a system using Linux, so I would be very interested in knowing what exactly failed …
P.S. Hi Janice, you young librarian you !! LOL
I don’t see any evidence of anything Linux related having failed here. There was a hardware failure and our host has lost contact with the guy who put this together for him. He lacks the expertise on Linux himself and apparently did not do any system maintenance for a very long time. Both the h/w and s/w seem to have performed sterling service over the last 10y.
The Linux kernel booted, but at the moment of switching to the root fs on the disk it got stuck. The contents of that fs do not hold a valid Linux root fs. My guess is that this is not the same partition as was originally being used (or it has been seriously corrupted).
Yes, having put out a call for help, the response, or lack thereof, seems a little odd.
Janice,
Anthony contacted me same day as my first post. He had some priority things to do ( data that has been idle for a few years is important, but not time urgent). We agreed to me being available today and forward, so I’m catching up now and prepping to drive, IFF still needed.
So no worries on Anthony and “piff”, OK?
BTW, further up thread Don K & Greg were talking about recovery boot processes and Windows et. al.
One thing I’m going to ask Anthony, when on site, is “Why Linux?”
I love it and live on it, but others prefer Windows. It would be very easy to toss a copy of the data onto an NTFS disk if desired. Part of my kit is a Dogbone Stack of 4 Linux boards ( 3 x Raspberry Pi and one Odroid) with support for most filesystems installed. I usually have ext3, ext4, ntfs, FAT32, and Macintosh file systems mounted at any one time. Useful for moving data cross OS types…
In any case, a modern Linux with NTFS support would allow archiving to a Windows-readable disk (with some minor metadata issues, so choose wisely the archive format).
Thanks for bringing us up to date. The absence of any response to your offer here led me to think there was some reason our host did not want to take you up on it but did not feel it was something he wanted to make public.
You sound eminently qualified, so no need for further suggestions. It would be interesting to know what it was when you find out. I strongly suspect that partitions have moved due to disk swapping and the partition being mounted is not the one where the root fs is.
It sounds like AW would be more comfortable with Windows, though in response to the “why Linux” I would say more secure, more reliable and less likely to get blown out by a forced update being pushed onto the system in the future.
/my2c.
Good luck.
..The failed CMOS battery was already suggested and laughed at !! ..Sigh !!
Butch
March 22, 2017 at 2:01 pm
I had something similar happen to a friend’s old computer. After he had it unplugged for a few hours during a house move, when he started it back up again, he got a message about not finding the RAID drive… I went into his BIOS setup and found that the BIOS settings had been reset to Factory Default… The BIOS date was set back to 2004!!! This was caused by the death of the internal BIOS battery. I replaced the battery and then reset ONLY the BIOS date to the present date and it rebooted fine… Some adjustments later to get everything updated, but the hard drives did work… (and it only cost $1.00 for the battery… LOL)
ckb
March 22, 2017 at 3:21 pm
The above is a great story and great advice, but I find it hard to believe Anthony got as far as this without noticing the clock was reset in the BIOS!
Never assume what others ( or yourself ) are capable of overlooking. Especially when panicking about losing “irrecoverable” data.
If you installed that system in 2007, then it’s way out of date by now, so forget the detail and do a fresh install on a new pair of mirrored drives. There have been dozens of updates and patches to Linux since then, including security fixes, so it’s far better to be using a current distro if you can. If you can build the raid to include a hot swap spare, do that as well, as it will automatically rebuild the mirror if one of the drives fails. Put both of the old drives to one side and wire them back into the system to recover the data once the new system is built and fully configured. Favourite distros here are Debian and Suse, both of which are very robust and just work out of the box.
I do electronics and embedded computing here and use a variety of OSes: Linux, FreeBSD and Solaris, for example. Freebsd includes the zfs file system as standard and is well worth considering for critical systems for that reason alone. Just built a remote network backup server recently using FreeBSD and it just works. The servers and machines here are in the lab, while the backup server is in the house, so if either burns down, there’s no data loss.
I’m in the uk, but would be glad to help if needed, if only remotely…
Regards,
Chris
Chris: “Freebsd includes the zfs file system as standard and is well worth considering for critical systems for that reason alone. […] I’m in the uk, but would be glad to help if needed, if only remotely”
You may want to seriously consider Chris’ offer to help, Anthony. ZFS will leave you grinning.
I entirely agree: anyone farming large collections of critical data will adore ZFS. The administration is dirt simple once you get the hang of it. And you can forget about holes in your data on RAID-like arrays. It’s self-monitoring (it can send you an email if there’s a data discrepancy) and self-heals on command. Unless the disc is just busted, in which case you just replace it with another in the array and it immediately resilvers. ZFS mirrors are hot-swappable by definition.
Now also available on Ubuntu 16.04 LTS though not (yet) its native filesystem. But easy enough to download and put to work immediately.
Chris, AFAICT, the reason for wanting to get this hopelessly unmaintained system back up is that the “backup” seems to rely on some specific software (maybe compression or incremental backups) which is on the drive, and that software will almost certainly not work on an up-to-date installation.
Some did suggest then installing the old Slackware in a VM but that would be a bare system and our host is apparently very unfamiliar with linux, so that would be an extra burden. If the disk is not corrupted it would be easier if it can be made to boot.
Greg – It may be possible to get one of the 2 drives booted, but the problem with that is that if the drive contents are even slightly corrupted, the boot process may make that worse. If the system crashed, the drives may not be in a sync’ed state and the boot process file system check may not recover from that.
All in all, better to build a clean system, then mount each drive in turn, read only, to recover the data. You can use dd or similar to create an image of the drive to a file, or even another drive…
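A dd image is just a byte-for-byte copy of the drive to a file. The sketch below demos it on a scratch file so it is runnable anywhere; on the real machine the input would be the old disk device (e.g. /dev/sdb — that name is a guess, check lsblk or fdisk -l first):

```shell
# Make a scratch "drive" so this example is runnable as-is; substitute the
# real device (e.g. /dev/sdb, an assumption) as if= on the actual hardware.
dd if=/dev/urandom of=/tmp/olddrive.bin bs=1K count=64 2>/dev/null
# conv=sync,noerror: pad short reads and keep going past read errors,
# which matters when imaging a failing disk.
dd if=/tmp/olddrive.bin of=/tmp/olddrive.img bs=64K conv=sync,noerror 2>/dev/null
cmp /tmp/olddrive.bin /tmp/olddrive.img && echo "images match"
```

The resulting image file can then be mounted read-only via a loop device, or written back to a spare drive, without ever touching the original again.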
Chris
Booting from a live DVD would be a “clean system”. That allows the initial ro check of what is in one drive copy. With the state of play shown above, it did not get as far as fsck.
In any case I think this has all become academic, since Anthony has barely responded to the many comments and offers of help, so I would conclude he has dropped his ideas about recovery.
What’s the problem? Oh, the AGE of the system? Yeah, I am still using my 2003 Compaq Presario with Windows XP, and so far, the only thing that has ever failed me was the HP backup drive that crashed with 3 months worth of photographs of wildflowers and birds that I may be able to recover by having some tech geek go in through the back door to find them. Like an idiot, I did NOT copy any of those pictures to a more reliable backup as a duplicate.
Lesson learned the hard way. Always have a backup that is completely reliable. I’m no computer expert by even a pig’s whisker, but this IS the real disadvantage in relying too much on electronic storage instead of keeping hard copies or discs not attached to the computer. It’s the reason I have print copies of any finished manuscript and the notes that go with it, as well as a copy on disc and/or jump drive from a reliable company.
Anthony, I wish you well. The data is still there. It’s just you who are blocked from accessing it.
“Lesson learned the hard way. Always have a backup that is completely reliable. ”
… and don’t keep it on the same physical medium as the original 😉
Or in the same physical location…
You can get nice 4 TB USB disks now that fit in a sandwich box… Copy to one and put it in a safe deposit box at the bank. Refresh quarterly (or as needed).
Or put it at a friends house if you don’t need the security / privacy of a bank…
Been running slackware since march 95 , , ,
To start with a clean system on a plain primary disk is good advice .
Entire old system can be restored to a chroot tree if need be . ..
you can run 57 flavors of userspace at the same time on a single kernel . .
######################################################
### Linux Mint 17.3 /home/data2/chroot2/LxMint17.3 ###########
/proc /home/data2/chroot2/LxMint17.3/proc auto bind,noauto,ro 0 0
/dev /home/data2/chroot2/LxMint17.3/dev auto bind,noauto 0 0
/sys /home/data2/chroot2/LxMint17.3/sys auto bind,noauto,ro 0 0
/dev/pts /home/data2/chroot2/LxMint17.3/dev/pts auto bind,noauto 0 0
/tmp /home/data2/chroot2/LxMint17.3/tmp auto bind,noauto 0 0
/home /home/data2/chroot2/LxMint17.3/home/homeh auto bind,noauto 0 0
/home/data2 /home/data2/chroot2/LxMint17.3/home/homeh/data2 auto bind,noauto 0 0
/home/data3 /home/data2/chroot2/LxMint17.3/home/homeh/data3 auto bind,noauto 0 0
##########################################
. . . Errors seen make it look like /dev is missing / lib missing etc
Recovery is 1E^42 times easier using another linux system configured
enough to be a comfortable place to be, on the target hardware, or not.
You don’t want to be anywhere near stuff you don’t want to lose while
you’re getting something running . . .
Linux can have multiple root disks, multiple kernels for each,
you can pull the broken system back on board,, fix it,, add it to lilo and boot it,,
slackware is one of the easiest on the planet to do this with because of its simplicity.
. . . no dumbass systemd to get in your way etc . .