Kernel Panic VFS Error and TPM Bypass (Part 1)

Yesterday, I ran into a problem with my AWS instance. Whenever I tried to start it up, I kept getting an error message that said something like "Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)", and the instance status check reported "Instance status checks failed". Even when I tried launching from a backup instance, the same error persisted. Thankfully, I managed to fix it using the following two methods.

Solution 1: Troubleshooting Missing initramfs or initrd Image

Using a Rescue Instance

Warning: This procedure requires stopping the instance. Data stored on instance store volumes is lost when the instance is stopped, so back up that data first. Unlike Amazon EBS-backed volumes, instance store volumes are ephemeral and don't persist data across a stop.
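The backup the warning recommends can also be scripted with the AWS CLI. Below is a minimal sketch; the `backup_root_volume` function name and the example volume ID are my own placeholders, and it assumes the AWS CLI is installed and configured with credentials:

```shell
# backup_root_volume VOLUME_ID: snapshot an EBS volume and wait until the
# snapshot completes, then print the snapshot ID. Function name is mine;
# requires a configured AWS CLI.
backup_root_volume() {
  local volume_id="$1" snapshot_id
  snapshot_id=$(aws ec2 create-snapshot \
    --volume-id "$volume_id" \
    --description "Backup before kernel panic rescue" \
    --query 'SnapshotId' --output text) || return 1
  # Block until the snapshot is fully captured before detaching anything.
  aws ec2 wait snapshot-completed --snapshot-ids "$snapshot_id"
  echo "$snapshot_id"
}

# Usage (the volume ID is a placeholder):
# backup_root_volume vol-0123456789abcdef0
```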

  1. Open the Amazon EC2 console.
  2. Navigate to Instances and select the impaired instance.
  3. Choose Actions, Instance State, Stop instance.
  4. In the Storage tab, select the Root device, and then the Volume ID.
    • Note: Create a snapshot of the root volume as a backup.
  5. Choose Actions, Detach Volume (/dev/sda1 or /dev/xvda), then confirm.
  6. Verify that the State is Available.
  7. Launch a new EC2 instance in the same Availability Zone with the same OS as the impaired instance, or use an existing instance with the same AMI and AZ. Note: this step is critical; an EBS volume can only be attached to an instance in the same Availability Zone.
  8. Once the rescue instance launches, go to Volumes, select the detached root volume.
  9. Choose Actions, Attach Volume.
  10. Select the rescue instance ID and enter /dev/xvdf.
  11. Run lsblk to verify successful attachment:
    $ lsblk
    
  12. Mount the root partition of the attached volume under /mnt (the device name may differ; on Nitro-based instances the volume appears as an NVMe device such as /dev/nvme1n1p1):
    $ mount -o nouuid /dev/nvme1n1p1 /mnt
    
  13. Bind-mount the special filesystems needed for a chroot environment:
    $ for i in dev proc sys run; do mount -o bind /$i /mnt/$i; done
    
  14. Run chroot on the mounted /mnt filesystem:
    $ chroot /mnt
    
  15. Run the following commands based on your OS:
  • RPM-based:
    $ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
    $ sudo dracut -f -v
  • Debian-based:
    $ sudo update-grub && sudo update-grub2
    $ sudo update-initramfs -u -v
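The verification that follows can also be scripted. Here is a minimal sketch; the `check_boot_pairs` helper name is my own, not part of any distro tooling, and it assumes the RPM-style `initramfs-VERSION.img` or Debian-style `initrd.img-VERSION` naming conventions:

```shell
# check_boot_pairs BOOTDIR: for every vmlinuz-* kernel image in BOOTDIR,
# report whether a matching initramfs-VERSION.img (RPM-based) or
# initrd.img-VERSION (Debian-based) image exists. Helper name is mine.
check_boot_pairs() {
  local bootdir="$1" kernel version
  for kernel in "$bootdir"/vmlinuz-*; do
    [ -e "$kernel" ] || continue          # no kernel images found
    version="${kernel##*/vmlinuz-}"       # e.g. 4.14.138-114.102.amzn2.x86_64
    if [ -e "$bootdir/initramfs-$version.img" ] || \
       [ -e "$bootdir/initrd.img-$version" ]; then
      echo "OK      $version"
    else
      echo "MISSING $version"
    fi
  done
}

# Usage inside the chroot:
# check_boot_pairs /boot
```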
  16. Verify that an initrd or initramfs image is present in the /boot directory alongside the corresponding kernel image (e.g., vmlinuz-4.14.138-114.102.amzn2.x86_64 and initramfs-4.14.138-114.102.amzn2.x86_64.img).

  17. Exit and clean up the chroot environment:

    $ exit
    $ umount /mnt/{dev,proc,run,sys}
    $ umount /mnt
    
  18. Detach the root volume from the rescue instance and attach it back to the original instance as /dev/sda1 or /dev/xvda.

  19. Start the original instance.
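The final detach, reattach, and restart can also be done from the AWS CLI. A sketch under the assumption that the CLI is configured; the `reattach_and_start` function name and all IDs are my own placeholders:

```shell
# reattach_and_start VOLUME_ID ORIGINAL_INSTANCE_ID: move an EBS root volume
# from the rescue instance back to the original instance and boot it.
# Function name and the example IDs are placeholders; requires a configured
# AWS CLI.
reattach_and_start() {
  local volume_id="$1" original_instance_id="$2"
  # Detach from the rescue instance and wait until the volume is Available.
  aws ec2 detach-volume --volume-id "$volume_id"
  aws ec2 wait volume-available --volume-ids "$volume_id"
  # Reattach as the original root device (adjust /dev/xvda vs /dev/sda1
  # to match what the impaired instance expects).
  aws ec2 attach-volume --volume-id "$volume_id" \
    --instance-id "$original_instance_id" --device /dev/xvda
  aws ec2 start-instances --instance-ids "$original_instance_id"
}

# Usage (both IDs are placeholders):
# reattach_and_start vol-0123456789abcdef0 i-0123456789abcdef0
```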

Checking for initrd or initramfs Image Rebuild Failure

To check whether the initrd or initramfs image rebuild fails, run dracut -f -v in the /boot directory; the same verbose output also lists any missing modules. If no errors are found, attempt to reboot the instance. If it reboots successfully, the error is resolved.
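To avoid eyeballing the whole verbose output, one option is to save it to a log and scan for failure lines. A minimal sketch; the `scan_rebuild_log` helper name and the error patterns are my own:

```shell
# scan_rebuild_log LOGFILE: print lines from a saved initramfs-rebuild log
# that look like failures or missing modules. Helper name and patterns are
# mine; capture the log first with e.g.:
#   sudo dracut -f -v 2>&1 | tee /tmp/dracut.log
scan_rebuild_log() {
  grep -iE 'fail|error|missing|cannot|not found' "$1" || echo "no errors found"
}

# Usage:
# scan_rebuild_log /tmp/dracut.log
```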

If the issue is still not resolved, use Solution 2.