Disk Utility (sitting in System -> Administration) will give you the serial numbers for all your disks; look at the top right for the serial. You'll notice that this drive is within an mdadm RAID array.

Here is what the kernel log shows for the failing disk and the degraded array:

Code: Select all
kernel: ata1: link is slow to respond, please be patient (ready=0)
kernel: ata1: COMRESET failed (errno=-16)
kernel: ata1: limiting SATA link speed to 3.0 Gbps
kernel: sd 1:0:0:0: 4096-byte physical blocks
kernel: sd 1:0:0:0: Write Protect is off
kernel: sd 1:0:0:0: Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
kernel: sd 1:0:0:0: Attached scsi generic sg0 type 0
kernel: i915 0000:00:02.0: vgaarb: deactivate vga console
kernel: Console: switching to colour dummy device 80x25
kernel: Initialized i915 1.6.0 ... for 0000:00:02.0
kernel: fb0: switching to i915 from EFI VGA
kernel: md/raid1:md0: active with 1 out of 2 mirrors
kernel: md0: detected capacity change from 0 to 1951053824

I cloned the partition table from the surviving disk onto the replacement and rebooted successfully, now working on the next step:

Code: Select all
sfdisk -d /dev/sdb | sfdisk /dev/sda --force
Checking that no-one is using this disk right now.
Sector size (logical/physical): 512 bytes / 4096 bytes
Disk identifier: 5230F3BD-69DD-48A0-918B-A5F124D63E95
/dev/sda1: Created a new partition 1 of type 'EFI System' and of size 953 MiB.
/dev/sda2: Created a new partition 2 of type 'Linux filesystem' and of size 915.5 GiB.
/dev/sda3: Created a new partition 3 of type 'Linux swap' and of size 15.1 GiB.

Btw, I'm still working on the procedure for rebuilding the RAID; quite a steep learning curve for a Unix beginner. I'm also wondering which files those pseudo devices /dev/loop0-5 make available as block devices and what they are used for. The relevant part of the partition listing:

Code: Select all
Disk /dev/loop0: 61.96 MiB, 64970752 bytes, 126896 sectors
Sector size (logical/physical): 512 bytes / 512 bytes
/dev/md0p1   2048  1951051775  1951049728  930.3G  Linux filesystem

Actually I want to be able to boot from one or the other device automatically if one fails, so I think LVM is not suitable, or am I wrong here?

LVM over a single RAID array is more flexible than multiple RAID arrays, and it is orthogonal to boot redundancy and disk fault tolerance. Create one (if you will use LVM) or several (if you will not use LVM) RAID partitions on each disk. Create RAID arrays using the RAID partitions. Use the single RAID array as an LVM physical volume, or use the multiple RAID arrays as /, swap and /home. If using LVM, create a volume group with the RAID array and create logical volumes for /, swap and /home.

The installer will use only one EFI partition, so you will not have boot redundancy. There are basically two different methods.

Method 1: have two independent EFI partitions, manually install GRUB on the second one and create/update EFI boot variables in the NVRAM with efibootmgr.

Method 2: create a RAID1 array with metadata 1.0 (superblock at the end) on the two EFI partitions, format it as FAT, mount it as /boot/efi, reinstall GRUB without updating the NVRAM and manually create/update EFI boot variables in the NVRAM with efibootmgr.
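For Method 2, here is a minimal sketch of the idea. It assumes the two EFI partitions are /dev/sda1 and /dev/sdb1; the array name /dev/md/esp and the boot entry labels are just examples, and the loader path must be checked against what actually exists under /boot/efi/EFI on your install:

Code: Select all
# RAID1 with the superblock at the end, so the firmware still sees a plain FAT filesystem
mdadm --create /dev/md/esp --level=1 --metadata=1.0 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.vfat -F 32 /dev/md/esp
mount /dev/md/esp /boot/efi
# Reinstall GRUB without touching the NVRAM
grub-install --efi-directory=/boot/efi --no-nvram
# Add one boot entry per disk by hand (adjust the loader path to your distribution)
efibootmgr --create --disk /dev/sda --part 1 --label "Mint (sda)" --loader '\EFI\ubuntu\shimx64.efi'
efibootmgr --create --disk /dev/sdb --part 1 --label "Mint (sdb)" --loader '\EFI\ubuntu\shimx64.efi'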
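For the LVM-over-RAID layout, a rough sketch; the array name /dev/md1, the volume group name vg0 and the sizes are purely illustrative:

Code: Select all
# The big data array becomes a single LVM physical volume
pvcreate /dev/md1
vgcreate vg0 /dev/md1
# Carve out logical volumes for /, swap and /home
lvcreate -L 50G -n root vg0
lvcreate -L 16G -n swap vg0
lvcreate -l 100%FREE -n home vg0
mkfs.ext4 /dev/vg0/root
mkswap /dev/vg0/swap
mkfs.ext4 /dev/vg0/home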
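As for the rebuild procedure, the usual sequence once the partition table has been cloned looks roughly like the following. It assumes the degraded array is /dev/md0 and that /dev/sda2 is the partition that should rejoin it; double-check both against your own layout first:

Code: Select all
# Confirm which device is missing from the array
cat /proc/mdstat
mdadm --detail /dev/md0
# Add the corresponding partition on the replacement disk back into the mirror
mdadm --manage /dev/md0 --add /dev/sda2
# Watch the resync progress until it reaches 100%
watch cat /proc/mdstat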
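On the /dev/loop0-5 question: a loop device simply exposes an ordinary file as a block device, and on Ubuntu-based systems these are often read-only squashfs images such as snap packages. A quick way to see which file backs each one, using standard util-linux tools:

Code: Select all
# Show each active loop device and the file it is backed by
losetup -a
# Or list one with its size, read-only flag and mount point
lsblk -o NAME,SIZE,RO,TYPE,MOUNTPOINT /dev/loop0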
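And if you prefer a terminal to Disk Utility, the serial numbers can also be read directly; lsblk is part of util-linux, while smartctl needs the smartmontools package:

Code: Select all
# Serial and model for every disk
lsblk -d -o NAME,SIZE,MODEL,SERIAL
# Full identity data, including the serial, for one disk
smartctl -i /dev/sda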