#	@(#)TODO	5.4 (Berkeley) %G%

TODO: =======================

Keith:
        Fix i_block increment for indirect blocks.
        If the file system is tar'd and extracted on top of another LFS, the
                IFILE is no longer valid.  Is the cleaner writing the IFILE?
                If not, let's make it read-only.
        Delete unnecessary source from utils in main-line source tree.
        Make sure that we're counting meta blocks in the inode i_block count.
        Overlap the version and nextfree fields in the IFILE (see the sketch
                after this list).
        Vinvalbuf (Kirk):
                Why are we writing blocks that are no longer useful?
                Are the semantics of close such that blocks have to be flushed?
                How do we specify in the buf chain the blocks that don't need
                to be written?  (Different numbering of indirect blocks.)
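
A minimal sketch of what "overlap the version and nextfree fields" might
look like, assuming the IFILE entry carries roughly the if_version,
if_daddr, and if_nextfree fields of lfs.h; whether the version can really
be spared while an entry sits on the free list is part of what this item
would have to settle.  The layout below is illustrative only, not the
committed design.

        /* Hypothetical overlap of the two fields via a union. */
        #include <stdint.h>

        struct ifile_entry {
                int32_t if_daddr;               /* disk address of the inode */
                union {
                        uint32_t if_version;    /* in use: file version number */
                        uint32_t if_nextfree;   /* free: next inode on the free list */
                } if_u;
        };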

Margo:
        Unmount: we're not doing a bgetvp (VHOLD) in the lfs_newbuf call.
        Document in the README file where the checkpoint information is
        on disk.
        Variable block sizes (Margo/Keith).
        Switch the byte accounting to sector accounting.
        Check lfs.h and make sure that the #defines/structures are all
        actually needed.
        Add a check in lfs_segment.c so that if the segment is empty,
        we don't write it (see the sketch after this list).  (Margo, do
        you remember what this meant?  TK)
        Need to keep vnode v_numoutput up to date for pending writes?
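
A minimal sketch of the empty-segment check mentioned above.  The
struct segment and its counters are stand-ins for whatever
lfs_segment.c actually accumulates, not the kernel's real field names.

        /* Assumed shape of the in-core segment being assembled. */
        struct segment {
                int nblocks;            /* data/indirect blocks gathered so far */
                int ninodes;            /* inodes gathered so far */
        };

        /* Nonzero if there is nothing worth writing. */
        static int
        lfs_seg_is_empty(const struct segment *sp)
        {
                return (sp->nblocks == 0 && sp->ninodes == 0);
        }

        /* The caller would bail out before doing any I/O. */
        static int
        lfs_maybe_writeseg(struct segment *sp)
        {
                if (lfs_seg_is_empty(sp))
                        return (0);     /* empty: skip the segment write */
                /* ... assemble the summary, checksum, and issue the write ... */
                return (1);
        }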

Carl:
        lfsck: If a file that's being executed is deleted, the version number
        isn't updated, and lfsck has to figure this out; the case is the same
        as having an inode that no directory references, so the file should
        be reattached into lost+found.
        USENIX paper (Carl/Margo).
        Investigate: clustering of reads (if blocks in the segment are ordered,
        we should read them all) and writes (McVoy paper).
        Investigate: should the access time be part of the IFILE:
                pro: theoretically, saves disk writes
                con: caching inodes should obviate this advantage;
                     the IFILE is already humongous
        Cleaner.
        Recovery/fsck.
        Port to OSF/1 (Carl/Keith).
        Currently there's no notion of write error checking.
        + Failed data/inode writes should be rescheduled (kernel-level
          bad blocking).
        + Failed superblock writes should cause selection of a new
          superblock for checkpointing (see the sketch after this list).
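
A user-level sketch of the superblock fallback idea from the last item:
when a checkpoint write fails, move on to the next superblock copy
rather than giving up.  The address table and write callback are
assumptions for illustration; the real kernel keeps the superblock
offsets in the superblock itself and writes through the buffer cache.

        #include <stddef.h>

        #define NSUPERBLOCKS 10         /* assumed number of superblock copies */

        typedef int (*sb_write_fn)(long daddr, const void *buf, size_t len);

        /*
         * Write the checkpoint superblock, falling back to the next copy
         * when a write fails.  Returns the disk address actually used,
         * or -1 if every copy failed.
         */
        long
        lfs_checkpoint_sb(const long sb_addrs[NSUPERBLOCKS], const void *sb,
            size_t len, sb_write_fn writefn)
        {
                int i;

                for (i = 0; i < NSUPERBLOCKS; i++)
                        if (writefn(sb_addrs[i], sb, len) == 0)
                                return (sb_addrs[i]);
                return (-1);
        }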

FUTURE FANTASIES: ============

+ unrm
        - versioning
+ transactions
+ extended cleaner policies
        - hot/cold data, data placement

==============================
Problem with the concept of multiple buffer headers referencing the segment:
Positives:
        Don't lock down one segment's worth of physical memory per file system.
        Don't copy from buffers to segment memory.
        Don't tie up the bus transferring 1MB.
        Works on controllers that don't support large transfers.
        Disk can start writing immediately instead of waiting 1/2 rotation
                and the full transfer.
Negatives:
        Have to do the segment write and then the segment summary write, since
        the latter is what verifies that the segment is okay.  (Is there
        another way to do this?  See the sketch below.)
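
A sketch of the ordering constraint above: the data goes out first, and
the summary that carries the checksum validating it goes out last, so a
crash in between leaves no valid summary and recovery ignores the
partial segment.  The structures, addresses, and helpers below are
illustrative stand-ins, not the LFS on-disk format.

        #include <stddef.h>
        #include <stdint.h>

        /* Stand-in summary block; the real segment summary has more fields. */
        struct seg_summary {
                uint32_t ss_cksum;      /* checksum over the data just written */
                uint32_t ss_nbytes;     /* amount of data this summary covers */
        };

        typedef int (*blk_write_fn)(long daddr, const void *buf, size_t len);

        /* Toy checksum; the real code has its own. */
        static uint32_t
        seg_cksum(const void *buf, size_t len)
        {
                const uint8_t *p = buf;
                uint32_t sum = 0;

                while (len-- > 0)
                        sum = (sum << 1) + *p++;
                return (sum);
        }

        /*
         * Two writes, strictly ordered: (1) the data blocks, (2) the
         * summary that vouches for them.  If (1) fails or the machine
         * dies before (2), no valid summary exists and the partial
         * segment is not believed.
         */
        static int
        write_partial_segment(long data_daddr, long sum_daddr,
            const void *data, size_t len, blk_write_fn writefn)
        {
                struct seg_summary ss;

                if (writefn(data_daddr, data, len) != 0)
                        return (-1);
                ss.ss_cksum = seg_cksum(data, len);
                ss.ss_nbytes = (uint32_t)len;
                return (writefn(sum_daddr, &ss, sizeof(ss)));
        }
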
==============================

We don't plan on doing the DIROP log until we try to do roll-forward.
This is part of what happens if random blocks get trashed and we try to
recover, i.e., the same information that DIROP tries to provide is
required for general recovery.  I believe that we're going to need an
fsck-like tool that resolves the disk (possibly a combination of
resolution, checkpoints and checksums).  The problem is that the current
implementation does not handle the destruction of, for example, the root
inode.
==============================

The algorithm for selecting the disk addresses of the super-blocks
has to be available to the user program which checks the file system.

(Currently in newfs; it should become a common subroutine.  See the
sketch below.)
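
A sketch of the "common subroutine" idea: one routine, compiled into
both newfs and the checker, maps a superblock index to a disk address
so the two programs can never disagree about where the copies live.
The even-spacing policy below is an assumption for illustration, not
the policy newfs actually uses.

        #define NSUPERBLOCKS 10                 /* assumed number of copies */

        /*
         * Disk address (in sectors) of superblock copy `i', given the
         * segment size in sectors and the total number of segments:
         * copies are spread evenly, one at the start of every
         * (nsegs / NSUPERBLOCKS)-th segment.
         */
        long
        lfs_sb_addr(int i, long seg_size, long nsegs)
        {
                long stride = nsegs / NSUPERBLOCKS;

                if (stride == 0)
                        stride = 1;
                return ((long)i * stride * seg_size);
        }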