Debugging FFS Mount Failures
This report is a continuation of my previous work on fuzzing filesystems via AFL.
You can find the previous posts where I described the fuzzing (part1, part2), as well as my EuroBSDcon presentation.
In this part, we won't talk much about fuzzing itself; instead, I want to describe the process of finding the root causes of filesystem issues, and my recent work on improving this process. The story begins with a mount issue that I found during my very first run of AFL and presented during my talk at EuroBSDcon in Lillehammer.
Invisible Mount Point
afl-fuzz: /dev/vnd0: opendisk: Device busy
That was the first error I saw on my setup, after a couple of seconds of the AFL run. I was not sure what exactly the problem was and thought that the mount wrapper might be causing it. After a long troubleshooting session, however, I realized that this might be my first discovered issue. To give the reader a better understanding of the problem without digging too deeply into the fuzzer setup or the mount process, let's assume that we have some broken filesystem image exposed as a block device visible as /dev/wd1a.
The device can easily be mounted on the mount point mnt1; however, when we try to unmount it, we get an error: error: ls: /mnt1: No such file or directory, and if we try the raw system call unmount(2), it also ends up with a similar error.
However, we can clearly see that the mount point exists with the mount command:
# mount
/dev/wd0a on / type ffs (local)
...
tmpfs on /var/shm type tmpfs (local)
/dev/vnd0 on /mnt1 type ffs (local)
Yet any lstat(2)-based command tries to convince us that no such directory exists.
# ls / | grep mnt
mnt
mnt1

# ls -alh /mnt1
ls: /mnt1: No such file or directory
# stat /mnt1
stat: /mnt1: lstat: No such file or directory
To understand what is happening, we need to dig a little deeper than standard shell tools allow. First of all, mnt1 is a directory created on the root partition of a local filesystem, so getdents(2) or dirent(3) should show it as an entry inside the directory structure on disk. The raw getdents syscall is a great tool for checking directory contents because it reads the data directly from the directory structure on disk.
# ./getdents  /
|inode_nr|rec_len|file_type|name_len(name)|
#:   2,      16,    IFDIR,       1 (.)
#:   2,      16,    IFDIR,       2 (..)
#:   5,      24,    IFREG,       6 (.cshrc)
#:   6,      24,    IFREG,       8 (.profile)
#:   7,      24,    IFREG,       8 (boot.cfg)
#: 3574272,  24,    IFDIR,       3 (etc)
...
#: 3872128,  24,    IFDIR,       3 (mnt)
#: 5315584,  24,    IFDIR,       4 (mnt1)
getdents confirms that we have mnt1 as a directory inside the root of our system FS. But we cannot execute lstat, unmount, or any other system call that requires a path to this file. A quick look at the definitions of these system calls shows their signatures:
unmount(const char *dir, int flags);
stat(const char *path, struct stat *sb);
lstat(const char *path, struct stat *sb);
open(const char *path, int flags, ...);
All of these functions take a path to the file as an argument, which, as we know, will end up in a VFS lookup. How about something that uses a file descriptor? Can we even obtain one? As we saw earlier, running open(2) on the path also returns an error (EACCES). It looks like, without digging inside the VFS lookup, we will not be able to understand the issue.
Get Filesystem Root
After some debugging and a code walkthrough, I found the place that caused the error. During name resolution, VFS needs to check and switch filesystems in case of embedded mount points. After the new filesystem is found, VFS_ROOT is issued on that particular mount point. In the case of FFS, VFS_ROOT translates to ufs_root, which calls the vcache with a fixed value equal to the inode number of the root inode, which is 2 for UFS.
#define UFS_ROOTINO     ((ino_t)2)
Below is a listing with the code of ufs_root from ufs/ufs/ufs_vfsops.c:
int
ufs_root(struct mount *mp, struct vnode **vpp)
{
...
        if ((error = VFS_VGET(mp, (ino_t)UFS_ROOTINO, &nvp)) != 0)
                return (error);
Using the debugger, I was able to confirm that the entry with number 2, after hashing, does not exist in the vcache. As a next step, I wanted to check the root inode on the given filesystem image. Filesystem debuggers are good tools for such checks. NetBSD comes with fsdb, a general-purpose filesystem debugger. Nonetheless, by default fsdb links against fsck_ffs, which ties it to FFS.
Filesystem Debugger to the Rescue!
A filesystem debugger is a tool designed to browse on-disk structures and the values of particular entries. It helps in understanding filesystem issues by showing the exact values that the system reads from the disk. Unfortunately, the current fsdb_ffs is a bit limited in the amount of information it exposes. Here is example output from trying to browse the damaged root inode on the corrupted FS:
# fsdb -dnF -f ./filesystem.out
** ./filesystem.out (NO WRITE)
superblock mismatches
...
clean = 0
isappleufs = 0, dirblksiz = 512
Editing file system `./filesystem.out'
Last Mounted on /mnt
current inode 2: unallocated inode

fsdb (inum: 2)> print
command `print
'
current inode 2: unallocated inode
FSDB Plugin: Print Formatted
Fortunately, fsdb_ffs leaves all the necessary interfaces to allow accessing this data with little effort. I implemented a simple plugin that allows browsing all values inside inodes, the superblock, and cylinder groups on FFS. There are still a couple of TODOs to finish, but the current version lets us review inodes.
fsdb (inum: 2)> pf inode number=2 format=ufs1
command `pf inode number=2 format=ufs1
'
Disk format ufs1 inode: 2 block: 512
 ----------------------------
di_mode: 0x0                    di_nlink: 0x0
di_size: 0x0                    di_atime: 0x0
di_atimensec: 0x0               di_mtime: 0x0
di_mtimensec: 0x0               di_ctime: 0x0
di_ctimensec: 0x0               di_flags: 0x0
di_blocks: 0x0                  di_gen: 0x6c3122e2
di_uid: 0x0                     di_gid: 0x0
di_modrev: 0x0
 --- inode.di_oldids ---
We can see that most of the root inode's fields in the filesystem image got wiped out. For comparison, if we take a look at the root inode from a freshly created FS, we see the proper structure. Based on that, we can quickly realize that the fields di_mode, di_nlink, di_size, and di_blocks are different and can be the root cause.
Disk format ufs1 inode: 2 block: 512
 ----------------------------
di_mode: 0x41ed                 di_nlink: 0x2
di_size: 0x200                  di_atime: 0x0
di_atimensec: 0x0               di_mtime: 0x0
di_mtimensec: 0x0               di_ctime: 0x0
di_ctimensec: 0x0               di_flags: 0x0
di_blocks: 0x1                  di_gen: 0x68881d2c
di_uid: 0x0                     di_gid: 0x0
di_modrev: 0x0
 --- inode.di_oldids ---
From FSDB and incore to source code
First, let's summarize what we already know:

- unmount fails due to a namei operation failure caused by the corrupted FS
- the filesystem has a corrupted root inode
- the corrupted root inode has the fields di_mode, di_nlink, di_size, and di_blocks set to zero
Now we can find the place where inodes are loaded from disk; for FFS this function is ffs_init_vnode(ump, vp, ino). It is called while loading a vnode in the VFS layer, inside ffs_loadvnode. A quick walkthrough of ffs_loadvnode exposes the usage of the field i_mode:
        error = ffs_init_vnode(ump, vp, ino);
        if (error)
                return error;

        ip = VTOI(vp);
        if (ip->i_mode == 0) {
                ffs_deinit_vnode(ump, vp);

                return ENOENT;
        }
This seems to be the source of our problem. Whenever we load an inode from disk to obtain its vnode, we validate that i_mode is non-zero.
In our case the root inode is wiped out, which results in the vnode being dropped and an error returned. So we simply cannot load any inode with i_mode set to zero, and inode number 2, the root, is no different here.
Because of that, the VFS_LOADVNODE operation always fails, so the lookup fails too, and name resolution returns ENOENT.
To fix this issue we need root inode validation at mount time. I created such a validation and tested it against the corrupted filesystem image. The mount returned an error, which confirmed the observation that such validation would help.
Conclusions
This post is a continuation of the project "Fuzzing Filesystems with kcov and AFL". In it, I presented how fuzzed bugs, which do not always show up as a system panic, can be analysed, and what tools a programmer can use to do that. The investigation above described the very first bug that I found by fuzzing mount(2) with AFL+kcov. During that root cause analysis, I realized the need for better tools for debugging filesystem-related issues. For that reason, I added a small piece of functionality, pf (print-formatted), to fsdb(8), to allow walking through the on-disk structures. The described bug was reported, with a proposed fix based on validation of the root inode, on the tech-kern mailing list.
Future work
Tools: I am still progressing with the fuzzing of the mount process; however, I do not focus only on finding bugs but also on tools that can be used for debugging and for regression testing. I am planning to add better support for browsing an inode's blocks to fsdb-pf, as well as write functionality that would make more testing, and potential recovery, easier.
Fuzzing: In the next post, I will show a remote AFL setup with an example of its usage.
I got a suggestion to take a look at the FreeBSD UFS validation on mount(2) done by McKusick. I think it is worth seeing what else is validated there and what we can port to NetBSD FFS.