Tutorial Week 7

I have been trying to stress to the students that C belongs to the same language paradigm as Ada and Pascal (procedural, imperative), and so the design techniques they have used for Pascal or Ada programs also work for C, with the main differences being the syntax, the ability to nest expressions, and the emphasis C places on error checking. So to write a C program they should design an algorithm using structured English (or similar) first and then translate that design into C. If you haven't been doing it that way, could you please do so? Thanks!
  • Discuss the MSDOS file system from these three viewpoints: a) the user's view of the file system; b) the Microsoft C programmer's API for directory manipulation (see the attached pages from the Microsoft C Programmer's Reference manual); c) the underlying implementation of the file system (see the textbook). Ans:
a) The user sees a hierarchical file system with mountable file systems (i.e. they can stick floppies in and out). File names have a maximum length that is too short, and there are no real security measures to protect files against carelessness, viruses, etc.
b) What they need to understand is the example program at the end of the _dos_find function descriptions, because that is how directories are traversed in MSDOS (a sketch of the idiom follows this item). It is different from Unix, although it accomplishes much the same thing. _dos_findfirst both opens the directory and reads it to find the first matching file; note that it also does pattern matching of file names. The description also mentions an implementation detail: it uses an interrupt to MSDOS to do the work. The information returned corresponds to the MSDOS directory entry and gives you everything the directory holds. (In contrast, in Unix you have to make two calls to get file information, because that information is stored in inodes, not in the directory.)
c) Check that they can find out how to read, say, the second block of a file, given a pathname such as C:\windows\win.com.
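For reference, a minimal sketch of the traversal idiom for part b), assuming Microsoft C's <dos.h> interface (_dos_findfirst, _dos_findnext and struct find_t); the field names and the "*.*" pattern here are assumptions for illustration, not taken from the attached manual pages:

#include <stdio.h>
#include <dos.h>

int main(void)
{
    struct find_t info;
    unsigned rc;

    /* _dos_findfirst opens the search and returns the first match;
       the pattern "*.*" shows the wildcard matching it does for you. */
    rc = _dos_findfirst("*.*", _A_NORMAL, &info);
    while (rc == 0) {
        /* One call returns everything the directory entry holds:
           name, attributes, size, date and time. */
        printf("%-12s %ld bytes\n", info.name, info.size);
        rc = _dos_findnext(&info);
    }
    return 0;
}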
  • How do MSDOS utilities such as Norton's ``undelete'' work? This utility attempts to recover a deleted file. How effective would this be in a multi-user timesharing environment? How could a file undelete program work in such an environment? Ans: When a file is deleted, the first character of the file name in the directory entry is overwritten to show that the entry is free; the actual blocks are not cleared. This allows the utility to restore the file as long as none of the freed blocks have been overwritten by later writes (a sketch of the idea follows this item). It works in practice because MSDOS users often ``freeze'' when they realise they have removed the wrong file, so the blocks have a good chance of not yet being overwritten. In a timesharing environment this would not help, because the longer you wait the more chance there is that another process overwrites the blocks. Undelete could only work there by having ``delete'' move the file to a hidden place and really remove it later; undelete would then just restore from that copy.
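A sketch of the directory-entry idea, using a simplified, illustrative record layout (not the exact on-disk FAT format) and the 0xE5 byte that MSDOS writes over the first character of a deleted name:

#include <stdio.h>

#define DELETED_MARK 0xE5          /* value written over the first name byte */

/* Simplified, illustrative directory record - not the exact on-disk layout. */
struct dir_entry {
    unsigned char name[11];        /* 8.3 name, stored without the dot */
    unsigned char attrib;
    unsigned short start_block;
    unsigned long size;
};

/* An undelete utility scans for entries like this; if the freed blocks
   have not been reused it can restore the first character of the name. */
static int is_deleted(const struct dir_entry *e)
{
    return e->name[0] == DELETED_MARK;
}

int main(void)
{
    struct dir_entry e = { { DELETED_MARK, 'E', 'P', 'O', 'R', 'T', ' ', ' ', 'T', 'X', 'T' },
                           0, 2, 1234 };
    printf("entry deleted? %s\n", is_deleted(&e) ? "yes" : "no");
    return 0;
}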
  • When a file is erased its blocks are generally put back on the free list, but they are not erased. Do you think it would be a good idea to have the Operating System erase each block before releasing it? Consider both security and performance factors in your answer, and explain the effect of each. Ans: It would slow performance, because every block of every deleted file would have to be written over before being freed. Not clearing them is a security hole, because the information is still on the disk and can be read back. Which matters more depends on your secrecy requirements (e.g. a drug dealer would probably want full erasure; the police would rather he didn't get it).
  • Some systems provide a call to rename a file. Is there any difference between using such a function and just copying the old file and then deleting it? Ans: Copying takes extra time, and it uses extra disk space while the copy is being made; on a nearly full disk that can make it impossible to ``rename'' a file this way (a sketch of both approaches follows this item).
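A sketch contrasting the two approaches on Unix, using rename(2) and an illustrative copy-then-unlink helper (the file names here are made up for the example):

#include <stdio.h>
#include <unistd.h>

/* The expensive way: every byte is read and rewritten, and both files
   exist at once, so the copy can fail if the disk is nearly full. */
static int copy_then_delete(const char *from, const char *to)
{
    FILE *in, *out;
    int c;

    if ((in = fopen(from, "rb")) == NULL)
        return -1;
    if ((out = fopen(to, "wb")) == NULL) {
        fclose(in);
        return -1;
    }
    while ((c = getc(in)) != EOF)
        putc(c, out);
    fclose(in);
    fclose(out);
    return unlink(from);           /* only now does the old name disappear */
}

int main(void)
{
    /* The cheap way: one call, no data blocks are touched. */
    if (rename("old.txt", "new.txt") != 0)
        perror("rename");

    if (copy_then_delete("old2.txt", "new2.txt") != 0)
        perror("copy_then_delete");
    return 0;
}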
  • An Operating System may allow filenames of unlimited length, only up to a maximum length, or of a fixed length. Discuss how this affects entries in directory files, with particular reference to adding and deleting files from the directory and searching for filenames in the directory. Ans: Fixed length makes implementation easy because you can use fixed-size directory records; this makes deleting and reusing the space easy, and it also makes searching easier because you can step from one record to the next quickly (see the sketch after this item). It would be very tedious for the user if every filename had to have exactly the same number of characters in it! A maximum length has the same implementation advantages if the entries are stored as fixed-size records, and is nicer for the user, but wastes directory space. Unlimited length forces variable-sized records, and then deleting files leaves variable-sized ``holes'' in the directory that require compaction from time to time.
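A sketch of the fixed-size-record idea: the layout below is illustrative only (the field names and sizes are made up), but it shows why lookup and slot reuse are simple when every entry is the same size:

#include <stdio.h>
#include <string.h>

#define NAME_MAX_LEN 14          /* fixed maximum filename length */
#define DIR_ENTRIES  4

struct dir_record {
    char name[NAME_MAX_LEN];     /* padded with '\0'; slot is free if name[0] == '\0' */
    unsigned long first_block;
};

/* Linear search: step through equal-sized records.
   Deleting a file is just clearing name[0]; the slot can be reused as-is. */
static int dir_lookup(const struct dir_record *dir, int n, const char *name)
{
    int i;
    for (i = 0; i < n; i++)
        if (dir[i].name[0] != '\0' && strcmp(dir[i].name, name) == 0)
            return i;
    return -1;
}

int main(void)
{
    struct dir_record dir[DIR_ENTRIES] = {
        { "notes.txt", 17 }, { "", 0 }, { "prog.c", 42 }, { "a.out", 90 }
    };
    printf("prog.c is at slot %d\n", dir_lookup(dir, DIR_ENTRIES, "prog.c"));
    return 0;
}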
Laboratory Week 7

  • Write a Unix program to open, read and close a directory using the system calls ``opendir'', ``readdir'' and ``closedir''. Print the names of the files in the directory. Ans: This is for the current directory:

#include <stdio.h>
#include <stdlib.h>
#include <dirent.h>

int main(void)
{
    DIR *dp;
    struct dirent *direntry;

    if ((dp = opendir(".")) == NULL) {
        fprintf(stderr, "can't open .\n");
        exit(1);
    }
    while ((direntry = readdir(dp)) != NULL)
        puts(direntry->d_name);
    closedir(dp);
    exit(0);
}
  • Write a Unix program that will read a filename and use the system call ``stat'' to gain information about the file. Use the macro ``S_ISDIR'' to test if the file is actually a directory. Ans:

#include <sys/types.h>
#include <sys/stat.h>
#include <sys/param.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    struct stat buf;
    char name[MAXPATHLEN];

    printf("Enter a file name: ");
    fflush(stdout);                     /* make sure the prompt appears */
    if (fgets(name, sizeof(name), stdin) == NULL)
        exit(1);
    name[strcspn(name, "\n")] = '\0';   /* strip the trailing newline */
    if (stat(name, &buf) != 0) {
        fprintf(stderr, "can't stat %s\n", name);
        exit(1);
    }
    if (S_ISDIR(buf.st_mode))
        printf("%s is a directory\n", name);
    else
        printf("%s is not a directory\n", name);
    exit(0);
}