Network Status                                    November 5, 1980

BUGS
----
-- Various response messages are lost.  This includes "fetching"
   files, when the file being retrieved never arrives.  I suspect
   this has something to do with unreliable delivery of error
   messages, but it is not reliably reproducible.

-- The net command will create data files in the queue directories
   without the corresponding control files ("dfa..." without "cfa...").
   The cause is unknown; perhaps an error such as an invalid machine
   name.  These orphaned files should be removed periodically.

-- The network makes no provision for errors in transit on intermediate
   machines, such as "No more processes" or "File System Overflow".
   These occur only rarely, but when they do, no message or
   notification is sent to anyone.

-- The network daemons occasionally dump core.  They should not.


SUGGESTIONS
-----------

-- Maintenance improvements:
   The network has become large enough that recompiling the source
   on all machines is practically impossible.
   The net command has a routing table for each remote machine
   compiled into it (defined in config.h), so adding a new machine
   to the network requires recompiling the net command on ALL
   machines.  Instead, the net command should read an external
   text file to build its data structures.
   There is a program, patchd, written by Bill Joy, which could
   be used to patch the binary versions of the network
   on like systems, such as the Computer Center machines.
   The network code should use the retrofit library for
   non-Version 7 systems.

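A sketch of the external-table idea: one line per remote machine, read at startup instead of compiled in through config.h.  The file format, field names, and the parseroute helper are assumptions for illustration, not the existing net command layout.

```c
#include <stdio.h>

/* Hypothetical routing-table entry, one per line of a text file:
 *     <machine-name> <next-hop> <link-speed>
 * The fields shown are assumptions, not the config.h layout. */
struct route {
    char name[32];      /* remote machine name             */
    char nexthop[32];   /* neighbor to forward through     */
    int  speed;         /* link speed, for choosing routes */
};

/* Parse one line of the table; returns 1 on success, 0 on a bad line. */
int parseroute(const char *line, struct route *r)
{
    return sscanf(line, "%31s %31s %d", r->name, r->nexthop, &r->speed) == 3;
}
```

A line such as "CoryUnix Cory 9600" would then describe a machine without touching any binary; adding a host becomes an edit to one file on each machine rather than a recompilation everywhere.
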
-- The possibility of a number of small UNIX personal machines wanting
   intermittent access to the network looms ahead.  We should attempt
   to organize the software to allow occasional use by such machines
   without tying down a port all the time.

-- Bob Fabry has suggested that the "machine" option be generalized
   to imply a machine/account pair: e.g. -m caf would imply user
   "caf" on Cory, while -m Cory would imply "fabry" on Cory.
   Environments could provide this information.
   It has also been suggested that a single "default" machine
   is too restrictive, and that each type of command should have
   its own default machine, e.g. netlpr to A, net to B, netmail
   to C, etc.

-- Colin has developed some data compression algorithms.  On machines
   which are normally CPU-idle, his algorithms could be used to
   compress data and speed up file transfer.
   Each sending host could decide whether its data should be
   compressed, and each receiving machine would be able to handle
   both compressed and uncompressed data.

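One way the both-forms requirement could look, sketched with a trivial run-length scheme (Colin's actual algorithms are not shown; the flag byte and helper names are assumptions): the sender compresses only when it shrinks the data and marks each buffer, so any receiver can handle either form.

```c
#include <stddef.h>

#define RAW  0      /* flag byte: data follows uncompressed   */
#define RLE  1      /* flag byte: data follows as (count,byte) */

/* Encode src[0..n) as (count,byte) pairs after a flag byte, falling
 * back to a raw copy when compression would not shrink the data.
 * Returns the number of bytes written to dst (dst must hold n + 1). */
size_t pack(const unsigned char *src, size_t n, unsigned char *dst)
{
    size_t i = 0, out = 1;

    dst[0] = RLE;
    while (i < n && out + 2 <= n + 1) {
        size_t run = 1;
        while (i + run < n && src[i + run] == src[i] && run < 255)
            run++;
        dst[out++] = (unsigned char)run;
        dst[out++] = src[i];
        i += run;
    }
    if (i < n) {                    /* did not shrink: send raw */
        dst[0] = RAW;
        for (i = 0; i < n; i++)
            dst[1 + i] = src[i];
        out = n + 1;
    }
    return out;
}

/* Decode either form into dst; returns the original length. */
size_t unpack(const unsigned char *src, size_t n, unsigned char *dst)
{
    size_t i, out = 0;

    if (src[0] == RAW) {
        for (i = 1; i < n; i++)
            dst[out++] = src[i];
    } else {
        for (i = 1; i + 1 < n; i += 2) {
            size_t k;
            for (k = 0; k < src[i]; k++)
                dst[out++] = src[i + 1];
        }
    }
    return out;
}
```
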
-- Files being retrieved, or fetched, are created zero-length
   when the request is sent to the remote machine.  An alternative
   would be to put the message "File being transferred." in the
   file to make it clear that the request is still in progress.

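A sketch of the placeholder, assuming a hypothetical markpending helper called when the request is queued; the completed transfer would later overwrite the file with the real contents.

```c
#include <stdio.h>

/* Leave a one-line note in the newly created file, so a user who
 * looks at it sees that the fetch is in progress rather than an
 * empty result.  Hypothetical helper; the message text is from the
 * suggestion above. */
int markpending(const char *path)
{
    FILE *fp = fopen(path, "w");

    if (fp == NULL)
        return -1;
    fputs("File being transferred.\n", fp);
    fclose(fp);
    return 0;
}
```
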
-- File modes should be preserved across the network.  Currently
   they are set to 0600 most of the time.

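A sketch of how the mode could travel with the file, assuming hypothetical sendmode/applymode helpers on the two ends.  Only the nine permission bits are carried, and the current 0600 behavior remains as the fallback.

```c
#include <sys/types.h>
#include <sys/stat.h>

/* Extract the permission bits to place in the transfer request.
 * Falls back to the present 0600 default if the file cannot be
 * examined.  Hypothetical helper names. */
int sendmode(const char *path)
{
    struct stat st;

    if (stat(path, &st) < 0)
        return 0600;
    return st.st_mode & 0777;
}

/* Apply the carried bits on the receiving machine once the file
 * is complete; returns -1 on failure, as chmod does. */
int applymode(const char *path, int mode)
{
    return chmod(path, mode & 0777);
}
```
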
-- It would be nice if the rcs facilities and commands on the various
   UNIX machines with rcs links were more accessible from machines
   without an rcs link.

-- The network was not expected to become as large as it has,
   and not much thought was given to large networks.
   The netq command lists only the queues on the local machine,
   but often the user is waiting in long queues on intermediate
   machines.
   Likewise, once a request has been forwarded to the nearest
   machine, the netrm command will not let the originator remove
   the queue file.
   Finally, a network status command telling people what the
   network is doing would be very helpful.

-- The underlying protocol is wasteful and/or confusing:
   * A full checksum should be computed on the entire file, in
     addition to the per-packet checksum now provided.
   It is unlikely this will be changed, since all the daemons
   on the network machines would have to be changed at once.

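The whole-file check could be as simple as a running sum folded over every data packet on both sides and compared once at the end of the transfer.  A plain additive 16-bit sum is shown below as an illustration; the checksum actually used per packet may well differ.

```c
#include <stddef.h>

/* Fold one packet's data bytes into a running 16-bit file checksum.
 * Sender and receiver each call this per packet, then compare the
 * final values at end of transfer.  Illustrative algorithm only. */
unsigned short filesum(unsigned short sum, const unsigned char *pkt, size_t n)
{
    size_t i;

    for (i = 0; i < n; i++)
        sum = (unsigned short)(sum + pkt[i]);
    return sum;
}
```
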
-- The netcp command should allow the user to default one of
   the filenames to a directory, a la the cp command.

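The cp-style default amounts to appending the last component of the source name to the directory.  A sketch, with a hypothetical defaultname helper that netcp could apply whenever the second argument names a directory:

```c
#include <stdio.h>
#include <string.h>

/* Build "dir/<last component of src>" into dst, the way cp does
 * when its target is a directory.  dst must hold len bytes. */
void defaultname(const char *src, const char *dir, char *dst, size_t len)
{
    const char *base = strrchr(src, '/');

    base = base ? base + 1 : src;       /* no slash: whole name */
    snprintf(dst, len, "%s/%s", dir, base);
}
```
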
-- File transfers, like remote mail, should be possible from
   the Berkeley Network to the Arpanet and the Bell Research Net.
   This is not difficult technically, but requires UNIX-like
   stream interfaces to be written for the gateways.

-- Currently, files being transferred are copied into
   /usr/spool...  To save time and space, it would be nice
   for large files to be referenced by a pointer instead.

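A sketch of the pointer idea using a hard link, assuming the spool directory and the file live on the same file system; the helper name is an assumption, and the existing copy path remains the fallback whenever link(2) fails (cross-device, permissions).

```c
#include <unistd.h>

/* Try to make the spool entry a hard link to the large file instead
 * of a copy.  Returns 1 if the link was made, 0 if the caller must
 * still copy the data as before.  Hypothetical helper. */
int spoollink(const char *file, const char *spool)
{
    if (link(file, spool) == 0)
        return 1;
    return 0;
}
```
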
-- The scheduler the daemon uses is very simple.
   It should have a way to age priorities, and to "nice"
   transfers so that they are done after all normal ones.
   Also, some network uses are time-dependent; it would be nice
   if certain queue files would disappear at certain times,
   for example when a remote machine is down and the requests
   are no longer useful.
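
Aging could be as simple as decrementing each waiting request's priority number on every queue scan, with "nice" transfers starting far enough down that normal work always runs first.  The constants and helper below are illustrative assumptions, not the daemon's scheduler:

```c
#define NORMAL  10      /* starting priority; lower runs sooner      */
#define NICED   100     /* "nice" start: after all normal transfers  */

/* One aging step for a request that has waited through another
 * scan of the queue; priority never drops below 1. */
int age(int prio)
{
    return prio > 1 ? prio - 1 : 1;
}
```

After enough scans even a niced transfer reaches priority 1, so nothing waits forever.
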