Linux Gazette... making Linux just a little more fun! Copyright © 1996 Specialized Systems Consultants, Inc. linux@ssc.com _________________________________________________________________ Welcome to Linux Gazette! (tm) Linux Gazette, a member of the Linux Documentation Project, is an on-line WWW publication that is dedicated to two simple ideas: * Making Linux just a little more fun * Sharing ideas and discoveries The basic idea behind these two concepts is that Linux is one cool OS, whose price for admission is a willingness to read, learn, tinker (aka, hack!), and then share your experiences. The Gazette is a compilation of basic tips, tricks, suggestions, ideas and short articles about Linux designed to make using Linux fun and easy. LG began as a personal project of John M. Fisk, and grew to include contributions freely provided by a growing number of authors. Linux Journal is now publishing the Gazette using material contributed by outside authors (note to potential authors). Without these authors there would not be a Gazette, and I thank them all. Drop a note to the author of anything that you find helpful or instructive--the author's e-mail address is included for this very purpose. Linux Gazette is a non-commercial publication and will remain that way. A tar, gzip file containing all issues of Linux Gazette and one containing the current issue can be found at ftp://ftp.ssc.com/pub/lg/ Thanks to Matt Welsh, coordinator of the Linux Documentation Project, for graciously bringing the Linux Gazette under the auspices of the LDP. The material included in these documents is covered by a designedly liberal copyright: as long as you are using the material for non-commercial purposes, you may do with them as you please. For information regarding copying and distribution of this material read the Copying License. A new table of contents will appear with each issue that will allow you to easily find articles of interest. A search engine is also provided to allow you to search all issues for items relating to a particular subject. Have fun! _________________________________________________________________ * Table of Contents Issue #12 * Table of Contents Issue #11 * Table of Contents Issue #10 * Table of Contents Issue #9 * Table of Contents Issues #1-#8 * Index of All Issues _________________________________________________________________ Search In: [Linux Gazette (TM).......] Search For: ______________________________ ______ _________________________________________________________________ Linux Gazette WWW & FTP Mirror Sites For those readers who are accessing Linux Gazette from outside the U.S. or are having problems with slow connections at a particular site, mirror sites are available worldwide. Thanks to all of the people who have kindly offered the use of their WWW and FTP sites in order to make this possible! _________________________________________________________________ Linux Journal's latest HOT LINUX NEWS! _________________________________________________________________ LINUX INFORMATION Two SSC links that you might find useful. The first is to Linux Journal 's "Hot Linux News" page, and the second is to SSC's Linux Resources page. _________________________________________________________________ LINUX GAZETTE IS PUBLISHED BY: SSC - Publishers of Linux Journal (tm) _________________________________________________________________ Got any great ideas for improvements! Send your comments, criticisms, suggestions and ideas. 
Linux Gazette, http://www.ssc.com/lg/ This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com LINUX GAZETTE Copyright © 1996 Specialized Systems Consultants, Inc. For information regarding copying and distribution of this material see the Copying License. _________________________________________________________________ TABLE OF CONTENTS ISSUE #12 _________________________________________________________________ * The Front Page * The MailBag * More 2 Cent Tips + Boot Information Display + Console Tricks + Firewalling / Masquerading with 2.0.xx + FTP and /etc/shells + How to Truncate /var/adm/messages + HTML, Use of BODY Attributes + lowerit Shell Script + Removing Users + Root and Passwords + Talk Daemon and Dynamic Addresses + tar Tricks * News Bytes + News in General + Software Announcements * The Adventure of Upgrading to Redhat 4.0, by Randy Appleton * Features of TCSH Shell, by Jesper Kjær Pedersen * FEddi HOWTO (English version), by Manuel Soriano * Graphics Muse, by Michael J. Hammel * InfoZIP Archive Utilities, by Robert G. "Doc" Savage * New Release Reviews, by Larry Ayers + Slang Applications for Linux + Updates to My Previous Reviews + The Yard Rescue Disk Package * Recent Linux Conferences + Unix Expo 1996, by Lydia Kinata + DECUS in Anaheim, by Phil Hughes + Open Systems World/Fed/UNIX, by Gary Moore * Setting Up the Apache Web Server, by Andy Kahn * Weekend Mechanic, by John M. Fisk * The Back Page + About This Month's Authors + Not Linux Graphics Muse Weekend Mechanic _________________________________________________________________ TWDT 1 (text) TWDT 2 (HTML) are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version. Our thanks go to Tushar Teredesai for pasting together the HTML version. _________________________________________________________________ Got any great ideas for improvements! Send your comments, criticisms, suggestions and ideas. _________________________________________________________________ This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com "Linux Gazette...making Linux just a little more fun!" _________________________________________________________________ The Mailbag! Write the Gazette at gazette@ssc.com _________________________________________________________________ Date: Thu, 31 Oct 1996 20:11:37 -0500 (EST) Subject: Re: Linux Gazette Issue 11 From: Elliot Lee, sopwith@cuc.edu Nice job, as always! :-) -- Elliot, webmaster@redhat.com (Thanks! --Editor) _________________________________________________________________ Date: Fri, 1 Nov 1996 10:49:21 -0600 (CST) Subject: Search Engine From: "Dan Crowson" dcrowson@cms.cmsc.com Organization: CMS Communications, Inc. Hello: what kind of search engine are you using for the Linux Gazette www server? Is this a linux-based engine? Thanks, Dan (Nope. It just builds on Linux --Editor) _________________________________________________________________ Date: Mon, 4 Nov 1996 17:24:30 -0500 Subject: Comments on Issue #11 From: "R. Frank Louden" flouden@fairfield.home.sweet.net I am always glad to see another issue of LG. Thank you for taking the time to compose it. One comment I'd like to make is the most recent issue (#11) is difficult for me to read on the spiral binding background. 
For me, the text lies over close enough to the left edge of the page, and it is almost hidden in some parts of the page. I may be one of a dying breed but I choose to use Mosaic and wish others would consider that MS and Netscape do NOT adhere to the HTML specs and are fragmenting the standards. I note that NCSA is working on a new version that will provide support that is not currently found in the version I use. I am at this moment using an unsupported version 2.7b5 (it's kinda buggy) but when it works it allows me to see the background you have used. While whirly-gigs and gewgaws are nice, some of us are still not able to upgrade hardware at the whim of the industry and need to have some consideration from those who sponsor WWW HTML documents. I have accessed pages that are completely illegible (with my old Mosaic) and others (with a more up-to-date browser) that take prohibitively long times to download. There IS something to be said for standards. Thanks again for the Gazette! It is great!!! (There may be more than one problem here. First off, if you are using a mirror site, the problem is my fault. Somehow, when building the tar file for the mirror sites, a gif that was integral to the notebook motif -- it moved the print away from the spiral -- was left out. I am in the process of notifying the mirror site where the missing file can be downloaded. The notebook spiral was put in using "tables" which is an HTML standard. Here at SSC we too believe in following HTML standards. In fact the program that we use to push things to the web checks that the HTML conforms. I have worried that by adding more graphics we might be causing problems with download times. However, we also would like to keep LG looking good, so thought we'd add away and see what kind of comments we get. So far it's tied. One who likes the spiral and yours against. BTW, if you are accessing LG through a mirror site, try the main site and see if it does better for you (http://www.ssc.com/lg). Glad you like LG, I certainly have fun putting it together. --Editor) _________________________________________________________________ Date: Fri, 01 Nov 1996 13:32:44 +1100 Subject: http://www.redhat.com/linux-info/lg/issue11/wkndmech.html From: Ken Yap ken@syd.dit.csiro.au Hi, Like your Linux Gazette, but some GIFs on the page are not displaying. Path problem? Thanks, Ken (John Fisk forwarded your mail to me. In building the tar file for the mirror sites some files got left out. I have furnished and updated file. Sorry about that. --Editor) Date: Mon, 04 Nov 1996 12:09:22 -0700 Subject: XDM Replacement link incorrect From: "Kevin J. Butler" butler@byu.edu Organization: Novell, Inc. In Issue 11 there is an incorrect link. On the page: http://www.ssc.com/lg/issue11/lg_tips11.html#xdm The link currently is: http://www.ssc.com/lg/issue11/alienor.fr/~pierre/index_us.html But should be: http://alienor.fr/~pierre/index_us.html Thanks for a great 'zine! :-) kb (Got it fixed. Thanks for letting me know. --Editor) _________________________________________________________________ Date: Mon, 4 Nov 1996 22:35:04 +0200 (EET) Subject: Re: Linux Gazette Issue 11 From: Lialios Dionysios ancient@eexi.gr Hello, this is Dennis from Greece. Well this time I managed to download the whole thing so now I have a full mirror. The only problem is that I didn't get (or I don't have) the searchbtn.gif and the htsearch.cgi that are used for the search engine. Did I make something wrong or should I have something I don't? Thank you in advance. 
Dennis (No, you did nothing wrong. I was so excited to have the search engine, I forgot that the mirrors wouldn't have the proper data bases. Since these data bases are very big and are for all of the SSC site, we have changed the links for the data base so that it always refers back to the SSC site rather than a relative address pointing to the mirror site. The updated front page file is in the update tar file along with the missing files. Let me know if it works for you. --Editor) _________________________________________________________________ Date: Tue, 5 Nov 1996 09:01:44 -0800 (PST) Subject: Request From: ivan.m@ieee.udistrital.edu.co (Ivan Mauricio Montenegro) It's the first time I hear about Linux Gazette, I'd want to have all the issues, but at the FTP addresses that appear on www.ssc.com have the horrible message "Login Error". What could I do? Thanks! Ivan Mauricio Montenegro IEEE Student Branch, Vice-Chairperson Distrital University, "Francisco Jos de Caldas", Bogota, Colombia (Not sure why you are having a problem. I can tell that others are able to download from that address without problem. Are you using your browser to point to that address or logging on with anonymous ftp? I would suggest using a ftp mirror site that is closer to you. Unfortunately, Linux Gazette does not have a mirror site in South America at this time. There is one in Mexico which is somewhat closer to you than Seattle. At any rate if you go to the Mirror Site page (http://www.ssc.com/lg/mirrors.html) in Linux Gazette, and use the links there to go to one of the ftp sites (ours or one of the mirrors), you shouldn't be asked for a login. (I never have been and that's why I am a little confused by the message you are getting.) Let me know if you continue to have problems, and thanks for writing. --Editor) _________________________________________________________________ Date: Wed, 06 Nov 1996 21:21:59 -0500 Subject: Great new look From: "Alan L. Waller" alwaller@shore.intercom.net Classy !!! Al (Thanks! Glad you like it. --Editor) _________________________________________________________________ Date: Sat, 09 Nov 1996 11:03:26 -0800 Subject: Thank you From: Innocent Bystander innocent@dopey.4dcomm.com Thank you very, very much for providing LG to people such as I, who haven't become Unix gods yet. After reading my first issue, I am now a dedicated reader. What can *I* do to assist LG? Innocent Bystander, innocent@dopey.4dcomm.com San Diego, CA (Send us your favorite tips and tricks. We love new contributors. Other than that tell all your friends about us and promote Linux where ever you are. --Editor) _________________________________________________________________ Date: Mon, 11 Nov 1996 08:08:08 -0600 (CST) Subject: Re: Great Writing To: "Lowe, Jimmy, D MSGT LGMPD" LOWEJ@SSG.GUNTER.AF.MIL From: "John M. Fisk" fiskjm@ctrvax.Vanderbilt.Edu Hello Jimmy! Thanks so much for taking the time to write! I appreciate it. I honestly can't take the credit for this -- the kind folks at SSC (and the Linux Journal) offered to take over the management of the LG when its administrative upkeep just got to be too much. Marjorie Richardson is its capable new Editor. I've taken the liberty of cc'ing a copy of this to her -- definitely deserves a pat on the back. 
Thanks again and Best Wishes, John --------------------------- On Thu, 7 Nov 1996, Lowe, Jimmy, D MSGT LGMPD wrote: > Hello John, > > I just wanted to say how glad I am to see the LG is being carried on > in such a fine manner -- during the summer I began to worry a small but > inspiring story was coming to an end. I think your writing is very > entertaining and informative! I really appreciate your work and that of > all the others in the Linux community and others (e.g. FSF). > > I hope to give back to this wonderful community of dedicated > hobbiest/computer wizards once I get a little more up-to-speed. > > Hope you and your family are well, > > Jim Lowe, Montgomery AL (I think John was being a little modest on this one. Jim was obviously glad to see John's new Weekend Mechanic column in Linux Gazette. I certainly was. Thanks a lot John. --Editor) _________________________________________________________________ Date: Sat, 09 Nov 1996 11:30:32 -0500 Subject: Bravo! From: "J.M. Paden" jmpaden@mnsinc.com "TWDT" is most appreciated. Thanks for the response to your readers requests. Regards, (You're welcome. We do aim to please. --Editor.) _________________________________________________________________ Date: Wed, 20 Nov 1996 13:41:36 -0800 Subject: Link to other Linux pages From: "J. Hunter Heinlen" dracus@third-wave.com Greetings.... I've gone through your title page for the Linux Gazette, and could not find a link to other Linux pages. Please put a link to page with links to other, commonly used Linux pages just below the Mirror sites link, and ask those that you give links for to provide links to you. This will make finding information much easier. Thank you for your time. (I'm not sure which are the commonly used Linux pages you'd like to have a link for on the LG front page. I have added a link to SSC's Linux Resources page at http://www.ssc.com/linux/. Why don't you look at that page and see if it has the links you are wanting. Let me know what you think. Thanks for writing. --Editor) _________________________________________________________________ Date: Tue, 19 Nov 1996 08:59:14 -0500br Subject: LG width From: Gerr gerr@lag.cts.du.edu Hi there. Just a suggestion about the page (which looks ... wow ... compared to before). If you could, however, try to keep it inside of one page wide, it would be wonderful. I find myself having to use the arrows to see what's on the end of lines on the right hand side of the page.. -- gerr@weaveworld (Thank you for writing. I didn't realize it was running over. I use a rather large window for viewing it myself. The problem seems to be a combination of the spiral and the width of the text inside the
tags. Not sure what can be done, but we'll look into it. --Editor)
_________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Next

This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright © 1996 Specialized Systems Consultants, Inc.
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________

MORE 2¢ TIPS!

Send Linux Tips and Tricks to gazette@ssc.com
_________________________________________________________________

CONTENTS:
* Boot Information Display
* Console Tricks
* Firewalling / Masquerading with 2.0.xx
* FTP and /etc/shells
* How to Truncate /var/adm/messages
* HTML, Use of BODY Attributes
* lowerit Shell Script
* Removing Users
* Root and Passwords
* Talk Daemon and Dynamic Addresses
* tar Tricks
_________________________________________________________________

BOOT INFORMATION DISPLAY

Date: Fri, 1 Nov 1996 09:58:52 -0800 (PST)
From: Laurie Lynne Tucker

dmesg | more

-- Forgot (or couldn't look fast enough) at boot time? This command will display your boot information (a.k.a. the "kernel ring buffer"). For more info, see the man page.
_________________________________________________________________

A 2 CENT CONSOLE TRICK

Date: Fri, 08 Nov 1996 03:42:27 -0800
From: Igor Markov imarkov@math.ucla.edu
Organization: UCLA, Department of Mathematics

Hi, Here's my 2c console trick: I put the following line into my ~/.xsession file:

nxterm -ls -geometry 80x5+45+705 -rv -sb -name "System messages" -fn 5x7 -T "System messages" -e tail -f /var/log/messages &

and this one into my .fvwm:

Style "System messages" NoTitle, Sticky, WindowListSkip

When I log in, I have a small 5-line (but scrollable) window near the bottom left corner (you may need to change the numbers in -geometry) where system messages appear in a tiny font as soon as they are produced. This lets me see when my dial-up script succeeds, when someone logs into my computer via TCP/IP, when some system error happens, etc. The .fvwm setup strips the title bar and does other useful things, but is not necessary.

Caveat: if you leave this window for a long time, a cron job which trims /var/log/messages will change the inode # for the file and tail -f is bound to freeze. In 99% of cases this cron job wakes up at 2-3 am, so tail may only freeze overnight. Login/logout and everything will be OK. Any other ideas?

Igor
_________________________________________________________________

FIREWALLING / MASQUERADING WITH 2.0.XX

Date: Sat, 2 Nov 1996 10:57:30 -0500 (EST)
From: Preston Brown pbrown@econ.yale.edu

Regarding the recent message about not being able to get IP masquerading working with 2.0.xx kernels: First, I *believe* that IP forwarding may have to be enabled for firewall support, but I can't say for sure. Suffice it to say that I have forwarding, firewalling, and masquerading all compiled into my kernel. I have a PPP link set up to the outside world, and my local ethernet subnet (192.168.2.x) is masqueraded so it can talk to the outside world as well. ipfwadm is used to set up the information (I call it from /etc/rc.d/rc.local at boot time):

# ip forwarding policies
ipfwadm -F -p deny                  # default policy is to deny
                                    # forwarding to all hosts
ipfwadm -F -a m -S 192.168.2.0/24   # add an entry for masquerading of
                                    # my local subnet
modprobe ip_masq_ftp                # load ftp support module
An 'ipfwadm -F -l' (i.e. list all forwarding policies) yields:

IP firewall forward rules, default policy: deny
type  prot  source              destination        ports
acc/m all   192.168.2.0/24      anywhere           n/a

This indicates that all is fine. Your local subnet now should be set up to talk to the outside world just fine.

---
-Preston Brown, preston.brown@yale.edu
_________________________________________________________________

FTP AND /ETC/SHELLS

Date: Fri, 1 Nov 1996 09:58:52 -0800 (PST)
From: Laurie Lynne Tucker

A user's shell must be included in the list at /etc/shells for ftp to work!!!!! (by default, you get only /bin/sh and /bin/bash!)

-- laurie
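A quick way to check and fix this (a minimal sketch; /bin/tcsh is only an example -- use whatever login shell your ftp users actually run, and edit the file as root):

cat /etc/shells                  # see which shells are currently allowed
echo /bin/tcsh >> /etc/shells    # append the missing shell (as root)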
_________________________________________________________________

HOW TO TRUNCATE /VAR/ADM/MESSAGES

Date: Fri, 1 Nov 1996 09:58:52 -0800 (PST)
From: Alex

In answer to the question: What is the proper way to close and reopen a new /var/adm/messages file from a running system?

Step one: rename the file. Syslog will still be writing to it after renaming, so you don't lose messages. Step two: create a new one. After re-initializing syslogd it will be used. Step three: make syslog use the new file. Do not restart it, just re-initialize.

1. mv /var/adm/messages /var/adm/messages.prev
2. touch /var/adm/messages
3. kill -1 pid-of-syslogd

This should work on a decent Unix(like) system, and I know Linux is one of them.
_________________________________________________________________

HTML, USE OF BODY ATTRIBUTES

Date: Thu, 14 Nov 1996 12:55:15 -0500
From: "Michael O'Keefe", michael.okeefe@lmc.ericsson.se
Organization: Ericsson Research Canada

G'day, If you are going to use any of the attributes to the <BODY> tag, then you should supply all of the attributes, even if you supply just the default values. Netscape, Mosaic and MSIE each assume a default <BODY> tag with BGCOLOR, TEXT, LINK, VLINK and ALINK set to that browser's own values. If you wish to slip the BACKGROUND attribute in there, by all means continue to do so, but for completeness (and good HTML design) you should supply the other attributes as well. The reason? You don't know what colors the user has set, and whether just setting a BACKGROUND image, or just a few of the colors, will render the page viewable or not. By supplying all of the values, even at their defaults, you ensure that everything contrasts accordingly.

-- Michael O'Keefe, Michael.OKeefe@lmc.ericsson.se / okeefe@odyssee.net
_________________________________________________________________

"LOWERIT" SHELL SCRIPT

Date: Fri, 1 Nov 1996 09:58:52 -0800 (PST)
From: Phil Hughes, phil@ssc.com

Here is a handy-dandy little shell script. It takes all the plain files (not directories) in the current directory and changes their names to lower case. Very handy when you unzip a bunch of MS-DOS files. If a name change would result in overwriting an existing file, the script asks you before doing the overwrite.

--------------------------- cut here -----------------------------------
#!/bin/sh
# lowerit
# convert all file names in the current directory to lower case
# only operates on plain files--does not change the name of directories
# will ask for verification before overwriting an existing file
for x in `ls`
do
    if [ ! -f $x ]; then
        continue
    fi
    lc=`echo $x | tr '[A-Z]' '[a-z]'`
    if [ $lc != $x ]; then
        mv -i $x $lc
    fi
done
_________________________________________________________________

REMOVING USERS

Date: 11 Nov 1996 18:54:02 GMT
From: Geoff Short, grs100@york.ac.uk

To remove users do the following:

Simple setups:
* Delete the password entry for the user from /etc/passwd
* Remove the user's files using rm -r /home/user
* Reboot (if any processes are still running)

More complex setups:
* http://kipper.york.ac.uk/rmuser.html

Geoff
----------------------------------------------------------------------------
"Ever sit and watch ants? They're always busy with something, never stop for a moment. I just can't identify with that kind of work ethic."
grs100@york.ac.uk / geoff@kipper.york.ac.uk / http://kipper.york.ac.uk/~geoff
----------------------------------------------------------------------------
_________________________________________________________________

ROOT AND PASSWORDS

Date: Fri, 1 Nov 1996 09:58:52 -0800 (PST)
From: Steve Mann smann@ultrix.ramapo.edu
Subject: Re: Root and passwords

If you have forgotten your root password:

1. Use a boot disk.
2. Login as root.
3. Mount the partition with your Linux.
4. Edit the second field, which is the encrypted password, of /etc/passwd to show nothing.

It would look something like this:

root::0:0:root,,,:/:/bin/zsh

instead of something like this:

wimpy:GoqTFXl3f:0:0:Steve:/root:/bin/zsh

You should then be able to login as root with no password at all.

Steve
==================================================================
Steve M -- Email: smann@ultrix.ramapo.edu
CCIS: 529-7500 x7922  Home: 722-1632  Beeper: 1-800-502-2775 or 201-909-1575
Ramapo College Apartments (Cypress Q): 934-9357
==================================================================
_________________________________________________________________

TALK DAEMON AND DYNAMIC ADDRESSES

Date: 11 Nov 1996 16:33:02 GMT
From: Adam Jenkins, ajenkins@kalgoorlie.cs.umass.edu
Organization: CMPSCI Department, UMass Amherst

Having problems sending a talk request to an IP-address other than your own? The solution is to reset your host name to your new dynamic address. You need to figure out what dynamic address you've been assigned. Then you can use the "host" command to find the symbolic name for it, and then use the "hostname" command to reset your machine's hostname. Like this:

host 128.119.220.0a

Prints out a name. Use it in:

hostname name.domain.edu

That's it. You need to be root to run the "hostname" command with an argument. If you're using pppd to get your connection, you can put all of this into your /etc/ppp/ip-up script -- read the pppd man page for more info -- so that it will get done automatically when you log in.

The reason you need to do this is because when talk sends a talk request, it also sends along what it thinks is the return address so that the remote talk can respond. So if your local machine has a fake address, the remote talk will get that as the return address and you'll never see the response.

I also saw a patched version of talk on sunsite somewhere, where someone made a hack to get talk to find your real address. But I like the "hostname" solution better because I've found at least one other program with the same problem, and the "hostname" solution fixes it too.
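For readers using pppd, here is a minimal sketch of the kind of /etc/ppp/ip-up script described above. It assumes pppd passes the local IP address as the script's fourth argument (see the pppd man page) and that the host name is the last field of the first line of "host" output -- verify both on your own system before relying on it.

#!/bin/sh
# /etc/ppp/ip-up -- run by pppd once the link comes up
# arguments: interface tty speed local-IP-address remote-IP-address ipparam
LOCALIP=$4
# look up the symbolic name for the dynamic address; adjust the parsing
# if your "host" prints its answer in a different format
NAME=`host $LOCALIP | awk 'NR==1 { print $NF }' | sed 's/\.$//'`
if [ -n "$NAME" ]; then
    hostname $NAME
fi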
_________________________________________________________________ TAR TRICKS Date: Tue, 12 Nov 1996 15:01:58 +0000 From: Dominic Binks dominic.binks@aethos.co.uk Organization: AEthos Communication Systems Ltd. A couple of things that interested me about the article on tar. I'm sure that the idea is to introduce pipes, and some of the lesser known unix utilities (tr, cut), but tar -tfvz file.tar.gz | tr -s ' ' | cut -d ' ' -f8 | less can be written more concisely tar tfz file.tar.gz | less Also you can use wild cards so tar tfz file.tar.gz *README* will list all readmes in the file. Finally two last pieces of useful Unix magic. tar cfv - dir will tar the directory dir and send the output to standard output. One piece of magic liked by Unix gurus is tar cfv - dir | (cd dir2; tar xf -) which copies one directory hierarchy to another location. Another piece of tar that might be really useful is that taring up a dos file system and moving it somewhere else will preserve *everything*. This means you can move your main DOS partition around, something that is very difficult to do with DOS. One final tip for all UNIX newbies: you got a file which unix will not allow you to delete. rm -- 'file' will get rid of it. In general -- terminates argument processing so that everything following is passed directly to the executable. Have fun Dominic Binks _________________________________________________________________ [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next _________________________________________________________________ This page maintained by the Editor of Linux Gazette, gazette@ssc.com Copyright © 1996 Specialized Systems Consultants, Inc. "Linux Gazette...making Linux just a little more fun!" _________________________________________________________________ News Bytes CONTENTS: * News in General * Software Announcements _________________________________________________________________ NEWS IN GENERAL _________________________________________________________________ AUTHORS WANTED FOR LINUX JOURNAL Are you interested in Perl, the Internet or Linux? Would you love to see your name in print? Well, then today is your day! Linux Journal is seeking authors for our upcoming issues. We are particularly interested in authors willing to write about Perl, the Web and Linux. We have some general topics we are soliciting articles for listed on our web site at http://www.linuxjournal.com/wanted.html. Please don't let these ideas limit you - if you have a great article idea we'd love to hear about it. For additional information: Gary Moore, Editor Linux Journal, ljeditor@ssc.com http://www.ssc.com/LJ/ Debian Linux SSC is also looking for an author to write a chapter on the installation of Debian Linux for the book Linux Installation and Getting Started by Matt Welsh. If you are interested, please send e-mail to ligs@ssc.com. _________________________________________________________________ CEASE FIRE! Date: Wed 13 Nov 1996 Bill Machrone, vice president of technology for Ziff-Davis Publishing Co, recently wrote in an article about Linux that Netscape 3.0 and Java were not yet available for Linux. He was wrong. Such things happen. Big deal. Even magazines of the highest quality sometimes print things that are wrong. You tell them about it, and they print a correction in the next issue. That's the way professionals handle things. That's not what some Linux people did, however. Instead, they flamed him, in private and in public. That's stupid. They urged others to also send flames to Machrone, which is worse. 
Things wouldn't be so bad, but now we have the Internet. The Internet allows just a few idiots completely ruin the reputation of Linux. Please, if you want to advocate Linux, be civil. Lars Wirzenius, Moderator, comp.os.linux.announce Bruce Perens, Project Leader, Debian GNU/Linux Distribution Alan Cox, Linux Networking Project, Linux International Technical Board _________________________________________________________________ LINUX IN THE NEWS For the latest article about Linux by Bill Machrone, see the November 11 issue of PC Week, "Up Periscope". This is a good article in which he requests feedback from Linux users. "The Linux Software Map" Unix Review, January, 1997, discusses the need for Linux documentation and the Linux Software Map (LSM). From Martin Michlmayr of Linux International we learn: According to a survey among a partial readership of iX, a German magazine devoted to Unix and networking, Linux is used at work by 45% of the readers. Solaris 1 and 2 taken together come second with 36%, followed by HP-UX with 27%. 56% of companies with less than 50 employees use Linux whereas it is used by 38% of firms with more than 1,000 employees. In addition, 60% of the readers use Linux on their computers at home. Linux International, bod@li.org _________________________________________________________________ LINUX APPLICATIONS AND UTILITIES LIST Date: 30 Oct 1996 The October 22, 1996 edition of the ***** LINUX APPLICATION AND UTILITIES LIST ***** is now available at it's home site and mirrors. The "Linux Applications and Utilities List" is an organized collection of pointers to the WWW home pages of almost 600 different Linux compatible application programs, system administration tools, utilities, device drivers, games, servers, programming tools, file, disk and desktop managers, Internet applications, and more. The "Linux Applications and Utilities List" and mirrors can be found at: Home Site U.S.A. (IL): URL:http://www.xnet.com/~blatura/linapps.shtml Bill Latura blatura@xnet.com Runtime Systems _________________________________________________________________ MAN PAGES TO HTML Marc Perkel, marc@ctyme.com, of Computer Tyme Software Lab, http://www.ctype.com/, has written a program to convert Man pages to HTML. Check out this web site with fully indexed man pages: http://www.ctyme.com/linuxdoc.htm This is a popular idea. There is an article coming out in the February issue of Linux Journal by Michael Hamilton, another guy who did this very same type of conversion. Michael's program is called vh-man2html and can be seen at http://www.caldera.com/cgi-bin/man2html. And he tells us of yet another page, http://wsinwp01.win.tue.nl:1234/maninfo.html, where converters can be found. _________________________________________________________________ MISSION CRITICAL LINUX PROJECT The "Mission Critical Linux Project" was created to document successful existing Linux systems which have a large load and 24 hour a day use. The survey will last until February 1, 1997. If you could access our web site, please visit one of following: * Japan * United States * Italy * The Netherlands * United States * Romania * Japan You can also see brief summary of answers. 
For additional information: Motoharu Kubo, mkubo@st.rim.or.jp http://www.st.rim.or.jp/~mkubo/ (English page under construction) _________________________________________________________________ NEW LINUX RESOURCE SITES A couple of new Linux Resources sites: Russ Spooner, russl@rmplc.co.uk http://www.pssltd.co.uk/kontagx/linux/index.html Joe Hohertz, support@golden.net http://www.golden.net/~jhohertz _________________________________________________________________ SLOVENIAN HOWTO 1.0 Date: Thu, 07 Nov 1996 The first ever version of Slovenian HOWTO is released. The document addresses Linux localization issues specific to Slovenian users and is written in Slovene. It can be accessed either on its "locus classicus": http://sizif.mf.uni-lj.si/linux/cee/Slovenian-HOWTO.html or the official Linux Documentation Project Site: http://sunsite.unc.edu/mdw/HOWTO/Slovenian-HOWTO.html or any of the numerous mirrors of the latter. For additional information: Primoz Peterlin, peterlin@biofiz.mf.uni-lj.si Institut za biofiziko MF, Lipiceva 2, SLO-1105 Ljubljana, Slovenija, http://sizif.mf.uni-lj.si/~peterlin/ _________________________________________________________________ SOFTWARE ANNOUNCEMENTS _________________________________________________________________ AMIGA DEVELOPMENT ENVIRONMENT Date: Mon, 18 Nov 1996 Tempe, Arizona - Cronus has announced the release of the long awaited Geek Gadgets CD-ROM. Geek Gadgets contains the Amiga Developers Environment (ADE) which is a project conceived and managed by Cronus to produce and support Amiga ports of dozens of the most popular development tools and utilities from the Free Software Foundation, BSD and other sources. This CD contains all the tools necessary to get started programming on the Amiga including advanced C, C++, Fortran and ADA compilers, assembler, linker, EMACS editor, "make", source code control systems (rcs&cvs), text and file utilities, GNU debugger, text formatters (groff & TEX) and more. Geek Gadgets is the perfect companion to the AT Developers CD which contains documentation and utilities but no development tools. Released quarterly, Geek Gadgets provides a quick and cost effective way to obtain the latest ADE for those with slow and/or expensive Internet connections. As a bonus, all the tools can be run directly from the CD-ROM without the need to install any files on your hard drive. Available from your local Amiga dealer or directly from Cronus. SRP $ 24.95 For additional information: Michelle Fish, mic@ninemoons.com _________________________________________________________________ OBJECTIVE-C 4.3.4 FOR LINUX Date: 30 Oct 1996 Release "4.3.4" of the Stepstone Objective C compiler is now available from System Essentials Limited for Linux versions 1.2.13 and higher. See: http://www.nai.net/~lerman Both Linux and OSF/1 Objective C 4.3.4 releases include: * compiler-chain driver script (objcc) * executable of the Objective C compiler (objcc.exe) * source of the original Objective C runtime library * sources of the ICpak101 Objective C foundation classes * man pages for both objcc and objcc.exe * tutorial program For additional information: Kenneth Lerman, Kenneth.Lerman@lerman.nai.net Systems Essentials Limited _________________________________________________________________ C++ MATRIX MATH LIBRARY Date: Sun, 03 Nov 1996 MathTools Ltd. is pleased to announce MAT, a Matlab Compatible C++ Matrix Class Library, designed for development of advanced scientific high-level C++ code. 
Evaluation version of the MAT can be downloaded from our home page, http://www.mathtools.com. The library includes over 300 mathematical functions covering Complex math, Binary and unary operators, Powerful indexing capabilities, Signal processing, File I/O, Linear algebra, String operations and Graphics. For additional information: MathTools Ltd., http://www.mathtools.com info@mathtools.com _________________________________________________________________ FIDOGATE 4.1.1 - FIDO-INTERNET GATEWAY Date: Wed, 13 Nov 1996 04:30:07 GMT FIDOGATE 4.1.1, an update to version 4 of the FIDOGATE package is available. FIDOGATE Version 4 ----------------------- * Fido-Internet Gateway * Fido FTN-FTN Gateway * Fido Mail Processor * Fido File Processor * Fido Areafix/Filefix ----------------------- Internet: - --------- http://www.fido.de/fidogate/ ftp://ftp.fido.de/pub/fidogate/ ftp://sunsite.unc.edu/pub/Linux/system/Fido/ fidogate-4.1.1.tar.gz 657 Kbyte For additional information: Martin Junius, mj@fido.de _________________________________________________________________ FXVOLUME 0.1, A SIMPLE XFORMS VOLUME CONTROL. Date: Wed, 13 Nov 1996 Fxvolume is a simple, no frills volume control designed to sit at the side of your screen and not get in the way. You simply run it, and then ignore it until you need to use it. It controls the level of the master sound device under Linux, using a slider created from the Xforms library. http://www.ee.mu.oz.au/staff/pbd/linux/fxvolume/ Use at your own risk - it has not been widely tested, but seems to work well enough... ;) For additional information: Paul Dwerryhouse, paul@mura.its.unimelb.edu.au University of Melbourne, Australia _________________________________________________________________ THE JAZZ MIDI SEQUENCER VERSION 2.6 Date: Tue, 05 Nov 1996 Announce: The free JAZZ midi sequencer version 2.6 JAZZ is a full size midi sequencer allowing record/play and many edit functions as quantize, copy, transpose ..., multiple undo; two main windows operating on whole tracks and single events; graphic pitch editing, GS sound editing functions and much more ... JAZZ is copyright (C) by Andreas Voss and Per Sigmond, and is distributed under the terms of the GNU GENERAL PUBLIC LICENSE (Gnu GPL). Web site: http://rokke.grm.hia.no/per/jazz.html Linux binary distribution: ftp://rokke.grm.hia.no/pub/midi/jazz/linux-bin/ Files: jazz-bin-v26b-xview.tar.gz, jazz-help-v26b-xview.tar.gz Source code distribution: ftp://rokke.grm.hia.no/pub/midi/jazz/ File: jazz-src-v26b.tar.gz For additional information: Andreas Voss. andreas@avix.rhein-neckar.de Per Sigmond, Per.Sigmond@hia.no Ericsson AS, ETO, etopesi@eto.ericsson.se _________________________________________________________________ UTIL-LINUX 2.6 Date: Wed, 13 Nov 1996 util-linux-2.6.tar.gz (source only distribution) Util-linux is a suite of essential utilities for any Linux system. It's primary audience is system integrators (like the people at Red Hat) and DIY Linux hackers. The rest of you will get a digested version of util-linux installed with no risk to your sanity. Util-linux is attempting to be portable, but the only platform it has been tested much on is Linux/Intel. There have however been integrated several patches for Arm, m68k, and Alpha linux versions. The present version is known to compile on at least Linux 1.2/libc 4.7.5 and Linux 2.0.22/Libc 5.3.12 (the Linux versions I run :-). People are encouraged to make _nice_ patches to util-linux and submit them to util-linux@math.uio.no. 
Util-Linux 2.6 is immediately available from ftp.math.uio.no:/pub/linux/util-linux-2.6 NOTE: Before installing util-linux. READ the README or risk nuking your system. Thank you. For additional information: Nicolai Langfeldt, janl@ifi.uio.no The popular front against MWM _________________________________________________________________ LYX-0.10.7 - LYX IS A WYSIWYG Date: 30 Oct 1996 LyX-0.10.7 has been uploaded to sunsite. It is also available from ftp://ftp.via.ecp.fr/pub/lyx and from my home page: http://www.lehigh.edu/~dlj0/LyriX.html LyX is a WYSIWYG front-end to LaTeX. It is used much like a word-processor, but LaTeX produces the final document. Figures, tables, mathematical formulas, fonts, headers, etc., are all drawn on-screen essentially as they appear on the final document. Figures (postscript) are placed in the document using a simple menu, as are tables. General text formatting is accomplished by high-level menu choices that automatically set fonts, indentation, spacing, etc., according to general LaTeX rules, and display (essentially) these settings on the screen. None of the power of LaTeX is lost, since you can embed any LaTeX command within a LyX document. Primary-site: sunsite.unc.edu /pub/Linux/apps/editors 501577 lyx-0.10.7-ELF-bin.tar.gz (binary release) 612839 lyx-0.10.7.tar.gz (original source) Copying-policy: GPL For additional information: David L. Johnson, dlj0@lehigh.edu Lehigh University, http://www.lehigh.edu/~dlj0/dlj0.html _________________________________________________________________ MPEGTV PLAYER Date: Tue, 05 Nov 1996 Announcing a new release of MpegTV, the real-time software MPEG Player for Linux (x86 ELF) and FreeBSD. A free version of the MpegTV player can be downloaded from the MpegTV web site at: http://www.mpegtv.com/ Main features: * Nice GUI with slide-bars and buttons (implemented with Xforms). * Plays MPEG-1 SIF bitstreams (352x240 pels) at 30 frames/sec on a P-200. * When the CPU resources are not sufficient, player skips some frames to achieve graceful degradation. * Can be installed as a Web Browser helper application to play MPEG. For additional information: Tristan Savatier, tristan@mpeg.org http://www.mpeg.org _________________________________________________________________ SPELLCASTER ISDN4LINUX ISDN DRIVER BETA Date: Wed, 13 Nov 1996 This message is to announce the public Beta release of the ISDN4Linux driver for SpellCaster ISA ISDN adapters. This beta program is open to anyone who prefers the bleeding edge and just can't wait for MP support. The beta driver currently supports the SpellCaster DataCommute/BRI and TeleCommute/BRI adapters and will also include support for the DataCommute/PRI adapter before the end of the Beta program. You can download the beta driver from: ftp://ftp.spellcast.com/pub/drivers/isdn4linux You require kernel revision. 2.0. You will also need the isdn4k-utils package also available the above mentioned FTP site or ftp.franken.de For additional information: Erik Petersen, erik@spellcast.com _________________________________________________________________ PUBLIC AVAILABILITY OF THE SECOND BETA OF STAROFFICE 3.1 FOR LINUX Date: 30 Oct 1996 Star Division announces the public availability of the second beta version of its office productivity suite, StarOffice 3.1, for Linux/x86. 
StarOffice 3.1 consists of: * StarWriter 3.1 -- word processor * StarCalc 3.1 -- spreadsheet * StarDraw 3.1 -- drawing and presentation tool * StarImage 3.1 -- image manipulation * StarChart 3.1 -- bar-, pie- and other charts * StarMath 3.1 -- graphical formula editor You will need an ELF system, X11R6 and Motif 2.0 libraries. This beta version expires at January, 1st, 1997. We will make newer beta versions available by then. The final version will be free of charge for private use. The price for commercial use is not yet decided. StarOffice 3.1 can be downloaded from the directory: ftp://sunsite.unc.edu/pub/Linux/apps/staroffice For additional information: Star Division GmbH, http://www.stardivision.de/ Matthias Kalle Dalheimer, mda@stardivision.de Marc Sewtz, mse@stardivision.de _________________________________________________________________ WGET, A WEB MIRRORING TOOL Date: Wed, 13 Nov 1996 Wget 1.4.0 [formerly known as Geturl] is an extensive rewrite of Geturl. Wget should now be easier to debug, maintain and most importantly, use. Wget is a freely available network utility to download files from the World Wide Web using HTTP and FTP. It works non-interactively, thus enabling work in the background, after having logged off. Wget works under almost all modern Unix variants and, unlike many other similar utilities, is written entirely in C, thus requiring no additional software (like Perl). As Wget uses the GNU Autoconf, it is easily built on and ported to other Unix's. Installation procedure is described in the INSTALL file. You can get the latest version of wget at: ftp://gnjilux.cc.fer.hr/pub/unix/util/wget/wget.tar.gz For additional information: Hrvoje Niksic, hniksic@srce.hr SRCE Zagreb, Croatia _________________________________________________________________ WOVEN GOODS FOR LINUX VERSION 1.0 Date: Tue, 05 Nov 1996 Woven Goods for LINUX Version 1.0 Version 1.0 of Woven Goods for LINUX is a collection of World-Wide Web (WWW) Applications and Hypertext-based Information about LINUX. It is ready configured for the Slackware Distribution and currently tested with Version 3.1 (ELF). The Power Linux LST Distribution contains this collection as an integral part with some changes. The five Parts of Woven Goods for LINUX are: * Part 1 -- World-wide Web Browser from Netscape for X11 and Lynx for ASCII terminals. * Part 2 -- LINUX Documents * Part 3 -- Apache World-wide Web Server and documentation, Glimpse Search Engine and more. * Part 4 -- Hypertext Markup Language Editor asWedit * Part 5 -- External Viewers Woven Goods for LINUX is available via anonymous FTP from: ftp://ftp.fokus.gmd.de/pub/Linux/woven The HTML Pages of Woven Goods for LINUX are snap shots of the LINUX Pages at FOKUS - Research Institute of Open Communication Systems and are available from: http://www.fokus.gmd.de/linux For additional information: Lutz Henckel, lutz.henckel@fokus.gmd.de GMD FOKUS, http://www.fokus.gmd.de/usr/hel/ _________________________________________________________________ XLDLAS V0.30 NOW AVAILABLE Date: 30 Oct 1996 Announcing xldlas v0.40 in sunsite's incoming directory: ftp://sunsite.unc.edu/pub/Linux/incoming/xldlas-0.40-srcbin.tgz Soon to be moved to: ftp://sunsite.unc.edu/pub/Linux/apps/math/xldlas-0.40-srcbin.tgz xldlas is for doing statistics. * Based on the xforms library (i.e. looks pretty slick) * Point and click interface to statistical summaries, OLS regression, plotting, correlation analysis, etc. 
* Experimental curve fitting routine that uses genetic algorithms with some nice visual feedback. * Very handy automatic generating of .tex format log files, including tables and plots. * Online help For additional information: Thor Sigvaldason, thor@netcom.ca http://www.a42.com/~thor/xldlas/ http://sunsite.math.klte.hu/mirrors/xldlas/ _________________________________________________________________ [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next _________________________________________________________________ This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com Copyright © 1996 Specialized Systems Consultants, Inc. "Linux Gazette...making Linux just a little more fun! " _________________________________________________________________ THE ADVENTURE OF UPGRADING TO REDHAT 4.0 (WITH ADVICE FOR OTHERS) By Randy Appleton, randy@EUCLID.ACS.nmu.edu _________________________________________________________________ Here at Northern Michigan University, we run a Linux lab with 14 workstations. Upgrading from Redhat 3.0 to Redhat 4.0 has been quite an adventure. This article describes the upgrading of one workstation. TIME The first thing to do when upgrading is to free up a significant block of time. We used a day and a night to upgrade one machine. That included downloading the software, making floppy disks, and fixing our errors along the way. In fact, if you're a busy person, and Redhat 3.0 is working fine for you, then you might choose to delay the upgrade, or even avoid it. However, at the Linux Lab at Northern Michigan, we try and stay near the cutting edge, so the upgrade was a must for us. METHOD The next step is to decide your upgrade method. The choices are the same ones from Redhat 3.0: * Upgrade from an NFS mounted directory of files. * Upgrade from a CD-ROM disk. * Upgrade from a spare partition containing the needed files. * Upgrade directly from an FTP site. The quickest and easiest way is to use the CD-ROM drive. This is the only way if you don't have a direct Internet connection, since you cannot download the necessary amount of data through a modem in any reasonable amount of time Since our workstations don't have CD-ROM drives, and do have an excellent Internet connection, we chose to do an FTP install. DOWNLOAD BOOT DISKS Before an FTP install can begin, two disks named boot.img and supp.img must be downloaded from ftp://ftp.redhat.com/pub/redhat/current/i386/images/ . They can be written to the floppy disks with the commands dd if=boot.img of=/dev/fd0 (switch disks) dd if=supp.img of=/dev/fd0 The second disk is only needed for an FTP install. Redhat 3.0 required three disks for all install types, so this change makes a significant savings in user effort. However, we had used the Redhat 3.0 disks as emergency boot disks to correct problems like forgetting the root password (yes, this does happen). The Redhat 4.0 boot disks are missing several important utilities (i.e. tar and vi) so cannot be used for this purpose. Also, notice that these two disks work for any supported hardware configuration. The older Redhat 3.0 required that the user search through a list of boot disks for the correct choice based on his hardware. This search often took more time than the download itself. Redhat 4.0 is much improved in this regard (our favorite new feature). BOOTUP AND HARDWARE CONFIGURATION The first thing you'll see after inserting the boot.img disk and rebooting the computer is a LILO prompt. 
Just the words:

boot:

We would have liked more explanation of our choices here. Redhat 3.0 offered a very nice menu of help text that explained the possible parameters and their effects. However, if you just wait in a perplexed fashion long enough, the system will become impatient and boot Linux for you.

The first difference you'll notice is that Redhat 4.0 prompts you to describe your hardware. It asks about SCSI controllers and network adapters, showing you a list of possible choices. Behind the scenes the Redhat 4.0 install script loads kernel modules to access your hardware. While this is happening, it is a good time to switch to virtual console #3 (press F3). This console shows what's happening in more technical detail, describing things like the mounting and unmounting of file systems, and the downloading of files. The older Redhat 3.0 did not have this feature, which we often use to debug problems. You can switch back to the main action by pressing F1.

The install scripts also query the user for network information. You should know your IP number, netmask, gateway, hostname, domain name, and name server before starting the install. We notice that Redhat 4.0 creates a default gateway and name server entry based upon your IP number and netmask, but these defaults are rarely right. Better, in our opinion, would be to have no default at all than a misleading one.

CHOOSING YOUR SOFTWARE

Redhat 4.0 will show you a menu of possible software upgrades and additions. This list is essentially the same as Redhat 3.0, except that most packages have increased in version number. The biggest problem we had involved the remote login software (rlogin, in.rlogind, in.rshd and in.telnetd). These have been upgraded to use the P.A.M. library and Kerberos. However, we often log in to our Linux workstations from older Sun Sparcs that do not run this software suite. For some unexplained reason, the SunOS clients could not access the Linux servers. We solved the problem by simply re-installing the older software.

In general, we suggest letting Redhat upgrade everything you might ever use. You should avoid downloading any software you are sure you will not need. Avoiding unneeded software will decrease the total time needed and the probability of network errors during the download.

THE LONG LONG DOWNLOAD

Step one of the download process is to pick an FTP site. There are many listed here. We started by choosing a site with a fast 'ping time' from us, since ping time is a reasonable approximation of FTP throughput and is quite quick to gather. To find out the ping time to a site like www.redhat.com, just type:

ping www.redhat.com

After ping runs for several packets, kill it with control-C. The average ping time will be shown at the bottom. We saw ping times from 80 - 300 milliseconds. Downloads are four times faster from the best site compared to the worst. It is well worth your time to explore using ping before picking a site at random.

The fastest was the aptly named ftp://ftp.real-time.com/pub/redhat . Unfortunately, they were not accepting FTP connections, so we used ftp://uicarhive.cso.uiuc.edu/pub/systems/linux/distributions/redhat . We could FTP to that site, but the download failed. It seems that the download scripts also want to know the version and architecture of the packages you are trying to download. Therefore, the correct URL is ftp://uicarhive.cso.uiuc.edu/pub/systems/linux/distributions/redhat/current/i386. That was not obvious from the directions.
We suggest that the Redhat folks either change their script to add these subdirectories or make their directions more clear.

For us, upgrading required downloading over 300 megabytes. I must say the status screen during the download is quite nice. The biggest problem with it is that it does not show the progress of downloading each package. Since the download was so long, we left it running overnight. Unfortunately, it failed on the download of LILO. The download script then waited for us to press a key acknowledging the error, which meant it stopped downloading some time during the night. Better would be to continue downloading while informing the user of this error. Once the download is finished, and you answer a few simple questions, you get to reboot your computer into Redhat 4.0 (yea!!).

THE UPGRADED SYSTEM

The first thing we noticed is that the kernel has been upgraded to Linux 2.0.19. Some problems we had before, like our tape drive not working, were fixed with this upgrade. Also, our Adaptec 2740 SCSI controller was accessible for the first time. Java support is included in the upgraded kernel.

We discovered the auto-mounter daemon (amd) was running, and had created a directory named /net. Inside /net is every computer mountable by your workstation. For example, /net/foo is the root directory of the host foo, assuming foo will allow outside access. Nice feature!!

The ps command has been changed. Formerly, we used 'ps -augx' to see all processes on our system. That command will no longer work. The new equivalent is 'ps -ax'.

The passwd command has been changed. In fact, my former password is now considered ill advised, and I've had to pick a new password.

The window manager fvwm95 has been included in the upgraded Redhat. Surprisingly, workman, the musical CD player, was not. See http://www.redhat.com/linux-info/pkglist/rh40_i386/all-packages.html for the complete list.

Happily, the Redhat 4.0 upgrade left much of our custom configuration intact. For example, we run a custom X server that Redhat left in place, and our NFS mounts as described in /etc/fstab were retained, even though the upgrade did change /etc/fstab to add other entries (like the /net file system). We did have to re-edit /etc/rc.d/rc.local to set our NIS domain.

THE ERRATA AND OTHER UPGRADES

The errata can be found at http://www.redhat.com/support/docs/rhl/rh40-errata-general.html . It is actually quite long. Basically, the errata is a list of package upgrades to Redhat 4.0, along with a description of applicability. We counted up to 40 packages to download and install, depending on your configuration. That's just too many!! Why doesn't Redhat make these improved packages a part of the latest Redhat release, possibly called Redhat 4.0.1? Luckily, the process is quite mechanical, and requires little thought. Just download the needed files, and run rpm -U on them.

Netscape has been upgraded since we did our original install. Unfortunately, Redhat does not include Netscape, so Netscape must be updated separately. Perhaps there are legal reasons Redhat does not include Netscape, but Redhat does include other non-free software, such as xv.

During the upgrade, the install scripts create backup copies of certain files in /etc/rc.d/rc*.d with the extension ".rpmsave". Once everything is set up correctly, you can delete any files in /etc/rc.d/rc*.d/*.rpmsave.
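Putting those last two steps together, a minimal sketch of applying the errata and cleaning up afterwards might look like this (the directory name is only a placeholder -- download whichever errata packages apply to your configuration):

# collect the downloaded errata packages in one place, then upgrade them all
cd /tmp/errata
rpm -U *.rpm
# after checking that the new init scripts work, clear the backups
rm -f /etc/rc.d/rc*.d/*.rpmsave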
THE FINISHED PRODUCT
Overall, the Redhat package is well done. The installation is easier for Redhat than any other Unix we know of. Redhat 4.0 is a collection of small upgrades of many packages from Redhat 3.0. There are only a few new packages (e.g. fvwm95, TheNextLevel). Overall, our system is much as it was before, but with many small improvements. Unless you have some need to upgrade, or just feel like messing around with your system, we suggest the results may not be worth the effort. Even so, we like Redhat 4.0 very much.
_________________________________________________________________
HOT LINKS * The Redhat home page * The author
_________________________________________________________________
If you have comments or suggestions, email me at randy@euclid.nmu.edu
_________________________________________________________________
Copyright © 1996, Randy Appleton Published in Issue 12 of the Linux Gazette
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun! "
_________________________________________________________________
Features of the TCSH Shell By Jesper Kjær Pedersen, blackie@imada.ou.dk
_________________________________________________________________
Abstract
In this article, I will describe some of the main features of TCSH, which I believe make it worth using as the primary login shell. This article is not meant to persuade bash users to change! I've never used bash, and for that reason I know very little about it. As some of you surely know, I've created a configuration tool called The Dotfile Generator, which can configure TCSH. I believe that this tool is very handy when one wants to get the most out of TCSH (without reading the manual page a couple of times). Because of that I'll refer to this tool several times throughout this article to show how it can be used to set up TCSH.
_________________________________________________________________
Why is the shell so important?
The shell is your interface for executing programs, managing files and directories, etc. Though very few people are aware of it, one uses the shell a great deal in daily work, e.g. when completing file names, using history substitution and using aliases. The TCSH shell offers all of these features and a few more, which the average user very seldom takes advantage of. With a good knowledge of your shell's power, you can decrease the time you need to spend in the shell, and increase the time spent on your actual tasks.
_________________________________________________________________
Command line completions
An important feature that is used by almost all users of a shell is command line completion. With this feature you don't need to type all the letters of a filename, only enough of them to make the name unambiguous. This means that if you wish to edit a file called file.txt, you may only need to type fi and hit the TAB key; the shell will then type the rest of the filename for you. By default, one can complete only on files and directories. This means that you can not complete on host names, process IDs, options for a given program, etc. Another thing you can not do with this basic type of completion is to complete on directory names only when typing the argument for the command cd. In TCSH, the completion mechanism is enhanced so that it is possible to tell TCSH which list to complete from for each command. This means that you can tell TCSH to complete from a list of host names when completing on the commands rlogin and ping. An alternative is to tell it to complete only on directories when the command is cd.
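Here is a small sketch of what such completions look like when written directly in a ~/.tcshrc (the host names are just placeholders):

# Complete the first argument of rlogin and ping from a list of
# host names kept in a shell variable.
set hosts = (ftp.funet.fi sunsite.unc.edu www.redhat.com)
complete rlogin 'p/1/$hosts/'
complete ping   'p/1/$hosts/'

# Complete the first argument of cd with directory names only.
complete cd 'p/1/d/'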
To configure user defined completion with The Dotfile Generator (from now on called TDG), go to the page completion -> userdefined. This will bring up a page which looks like this: [IMAGE] As the command name, you tell TDG which command you wish to define a completion for. In this example it is rm. Next you have to tell TDG which arguments to the command this completion should apply to. To do this, press the button labeled Position definition. This will bring up a page which is split in two parts: [IMAGE] In the first part, you tell TDG that the position definition should be based on the index of the argument being completed (the one where the Tab key is pressed). Here you can tell it that you wish to complete on the first argument, on all the arguments except the first one, etc. [IMAGE] The alternative to position dependent completion is pattern dependent completion. This means that you can tell TDG that this completion should only apply if the current word, the previous word or the word before the previous word conforms to a given pattern. Now you have to tell TDG which list to complete from. To do this press the button labeled List. This will bring up a page where you can select from a lot of different lists, e.g. aliases, user names, or directories.
FILES AND DIRECTORIES
Four of the lists you can select from are Commands, Directories, File names and Text files. If you give the optional directory to any of these, only elements from this directory are used.
PREDEFINED LISTS
There are two ways to let completion be done from a predefined list. One is to mark the option predefined list, and type all the options in this list. This solution is a bad idea if the list is used in several places (e.g. a list of host names); in that case, one should select the list to be located in a variable, and then set this variable in the .tcshrc file.
OUTPUT FROM COMMAND
In many cases the list should be calculated when the completion takes place. This could, for example, be a list of users located at a given host, or targets in a makefile. To set up such a completion, first develop the command which returns the list to complete from. The command must return the completion list on standard output as a space separated list. When this is done, insert this command in the entry saying Output From Command. Here's a little Perl command which finds the targets in a makefile: perl -ne 'if (/^([^.#][^:]+):/) {print "$1 "}' Makefile If this is inserted in the entry, one can complete on targets from the file called Makefile in the current working directory. If someone should think that it's only to promote TDG that I describe TCSH through it, (s)he should take a look at the following line, which is the generated code for the make completion: complete make 'p@*@`perl -ne '"'"'if (/^([^.#][^:]+):/) {print "$1"}'"'"' Makefile`@'
RESTRICT TO PATTERN
With user defined completion, you can restrict the files which are matched for each command. Here are some very useful examples: Restrict latex to *.{tex,dtx,ins}: the latex command will only complete on files ending in .tex, .dtx or .ins. Restrict rm to ^*.{tex,html,c,h}: this means that you can not complete rm to a .tex, .html, .c or .h file! I've done that a few times, e.g. when I wanted to delete a file called important.c~. Since the file important.c existed, tcsh only completed to that name, and I deleted the wrong file because I was too quick :-(
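Written by hand in tcsh's own completion syntax, restrictions like these would look roughly as follows. I have not taken this from TDG's output, so treat it as a sketch of the idea rather than the exact generated code:

# Only complete latex arguments with TeX-related files.
complete latex 'p/*/f:*.{tex,dtx,ins}/'

# Never complete rm to source or document files you probably want to keep.
complete rm 'p/*/f:^*.{tex,html,c,h}/'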
ADDITIONAL EXAMPLES
Additional examples can be obtained from TDG if you load the export file distributed with TDG. Please note that if you wish to keep the other pages, you have to tell TDG to import only the page completion/userdefined. This is done on the Details page, which is accessible from the reload page.
_________________________________________________________________
Configuring the prompt
Configuring the prompt is very easy with TDG. Just enter the menu called prompt. On this page you can configure three prompts:
prompt This is the usual prompt, which you see on the command line where you are about to enter a command.
prompt2 This prompt is used in foreach and while loops, and on lines continuing a line that ended with a backslash.
prompt3 This prompt is used when TCSH tries to help you when it meets commands it doesn't know (called spell checking).
The prompts are mixed from tokens and ordinary text. The tokens are inserted by clicking on them in the menu below the scrollbar, and the ordinary text is simply typed in. When a token is inserted, an indication will be shown in the entry. Here's an example of how this may look: [IMAGE] [IMAGE] As has been discussed in issue 6 of the Gazette, some of the prompt may be located in the xterm title bar instead of on the command line. To do this, choose font change and select Xterm.
_________________________________________________________________
History
The history mechanism of the shell is a valuable thing which makes it easier to type similar commands after each other. To see a list of the previously executed commands, type history. The following table lists the event specifiers:
!n    This refers to the history event with index n
!-n   This refers to the history event which was executed n commands ago: !-1 for the previous command, !-2 for the one before that, etc.
!!    This refers to the previous command
!#    This refers to the current command
!s    This refers to the most recent command whose first word begins with the string s
!?s?  This refers to the most recent command which contains the string s
With these specifiers, you can re-execute a command: e.g. just type !! to re-execute the previous command. This is, however, often not what you want to do. What you really want is to re-execute some part of a previous command with some new elements added. To do this, you can use one of the following word designators, which are appended to the event specifier with a colon:
0     The first word (i.e. the command name)
n     The nth word
$     The last argument
%     The word matched by an ?s? search
x-y   The range of arguments from x to y
*     All the arguments to the command (equal to ^-$)
Now it's possible to get the last argument from the previous command by typing !!:$. You will find, however, that you very often refer to the previous command, so if no event specifier is given, the previous command is used. This means that instead of writing !!:$, you may simply write !$. More word designators exist, and it's even possible to edit the words with different modifiers. For more information about this and for more examples, please take a look in the tcsh manual. [IMAGE] It is possible to expand the history references on the command line before you evaluate them by pressing ESC-SPC or ESC-! (that is: first the escape key, and then the space key or the ! key). On some keyboards you may use the meta key instead of the escape key, i.e. M-SPC (one keystroke!).
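As a small illustration of these designators (the file names are made up; tcsh prints each expanded command line before running it):

> ls -l article.tex notes.txt
> vi !$
vi notes.txt
> echo !?article?:%
echo article.tex
article.tex

The second and fourth lines of output are tcsh echoing the expanded commands: !$ picked up the last argument of the previous command, and !?article?:% picked out the word that matched the ?article? search.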
_________________________________________________________________
Patterns
Many operations in the shell work on many files at once, e.g. all files ending with .tex or starting with test-. Tcsh can type all these file names for you, using file patterns. The following list shows the possibilities:
*      Matches any number of characters
?      Matches a single character
[...]  Matches any single character in the list
[x-y]  Matches any character within the range of characters from x to y
[^...] Matches any character which is not in the list
{...}  Expands to all the words listed; they do not need to match anything
^...   A ^ at the beginning of a pattern negates the pattern
EXAMPLES
Match all files ending with .tex:                *.tex
Match all files which do not end with .tex:      ^*.tex
Match xxxabyy, xxxcdeyy and xxxhifjyy:           xxx{ab,cde,hifj}yy
Match all .c and .h files:                       *.[ch] or *.{c,h}
THE SHELL EXPANDS THE PATTERNS
An important thing to be aware of is that it is the shell which expands the patterns, not the programs that are executed with them. An example of this is the program mcopy, which copies files from a DOS disk. To copy all files, you may wish to use a star as in: mcopy a:* /tmp. This does not work, however, since the shell will try to expand the star, and since it cannot find any files which start with a:, it will signal an error. So if you wish to send a star to the program, you have to escape the star: mcopy a:\* . [IMAGE] There are two very useful key bindings which can be used with patterns. The first is C-x g, which lists all the files matching the pattern without executing the command. The other is C-x *, which expands the star on the command line. This is especially useful if you, for example, wish to delete all files ending in .c except important.c, stable.c and another.c. Creating a pattern for this might be very hard, so just use the pattern *.c. Then type C-x *, which will expand *.c to all your .c files. Now it's easy to remove the three files from the list.
_________________________________________________________________
Aliases
When using the shell one will soon recognize that certain commands are typed again and again. The one at the top of the list is surely ls -la, which lists all files in a directory in long form. TCSH has a mechanism to create aliases for commands. This means that you can create an alias for ls -la just called la. Aliases may refer to the arguments of the command line. This means that you can create a command called pack which takes a directory name and packs the directory with tar and gzip, etc. Aliases can often be a bit hard to create, since one often wants history/variable references expanded at the time of use, and not at definition time. This is made easier with TDG, so go to the page aliases to define aliases. If you end up with an alias you can define in tcsh but not on this page, please send me an email. For more information about aliases, see the tcsh manual.
_________________________________________________________________
Timing programs
Have you ever needed to know how long a program took to run, how much CPU it used, etc.? If so, you may recognize the output from the tcsh built-in time command: 0.020u 0.040s 0:00.11 54.5% 0+0k 0+0io 21pf+0w Informative? Yes, but... The GNU time command is a bit more understandable: 0.01user 0.08system 0:00.32elapsed 28%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (0major+0minor)pagefaults 0swaps But still... In TDG you can configure the output from the time command on the page called jobs. It looks like this: [IMAGE] As for the prompt, here's an entry once again for mixed tokens and ordinary text.
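For readers who prefer editing ~/.tcshrc by hand, here is a rough sketch of alias and time settings along the lines discussed above. The pack alias and the time format string are my own guesses at something reasonable, not output from TDG:

# A shorthand for the ever-popular long listing.
alias la 'ls -la'

# An alias that refers to its argument: "pack somedir" creates somedir.tar.gz.
# \!:1 is replaced by the first argument when the alias is used.
alias pack 'tar cf - \!:1 | gzip > \!:1.tar.gz'

# Report resource usage for any command taking more than 8 CPU seconds,
# in a more readable format (%U user time, %S system time, %E elapsed, %P CPU).
set time=(8 "%U user, %S system, %E elapsed, %P cpu")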
Remember, if there is something in TDG that you do not understand, help is available by pressing the right mouse button over the given widget.
_________________________________________________________________
References
As you may have guessed, TDG and this article will take you a long way towards using TCSH, BUT you may need to read a bit more to get the most out of it. Here are a few references: * The Tcsh manual page * The O'Reilly book on tcsh * The Tcsh mailing list (send mail to listserv@mx.gw.com with body text SUBscribe TCSH your name)
_________________________________________________________________
Jesper Kjær Pedersen
_________________________________________________________________
Copyright © 1996, Jesper Kjær Pedersen Published in Issue 12 of the Linux Gazette
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun! "
_________________________________________________________________
Previous Next Table of Contents
_________________________________________________________________
FEDDI-COMO Manuel Soriano manu@ctv.es June 29, 1996 v0.5
_________________________________________________________________
The present document derives from the famous feddi.como which comes with the FEddi+bt packages; this paper is based upon version 0.5.
_________________________________________________________________
1. Credits
2. Introduction
3. Installing FEddi
 * 3.1 Installing the fido user.
 * 3.2 Necessary packages
 * 3.3 Mailer installation/configuration.
 * 3.4 Check and usage.
4. Installation of Binkley.
 * 4.1 Configuration/Installation of the caller
 * 4.2 Problems
 * 4.3 ``Templates''.
5. Messages, collaborations, tricks
 * 5.1 futility
 * 5.2 File request (FREQ).
 * 5.3 Frequent addresses.
 * 5.4 Scripts and tools.
 * 5.5 Automation: The personal area.
 * 5.6 A few `tricks' for those that don't agree with RTFM.
 * 5.7 Grouping by tens Binkley's appearance:
6. Goodbye and conclusion.
_________________________________________________________________
Previous Next Table of Contents
_________________________________________________________________
Copyright © 1996, Manuel Soriano Published in Issue 12 of the Linux Gazette
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
Previous Next Table of Contents
_________________________________________________________________
1. Credits
The original author of the FEddi packages is Oliver Graf, 2:2454/130.69; the original port of bt to *nix is copyright (c) 1992, 1993 by Ben Stuyts; the adaptation to LINUX is copyright (c) 1993 Louis Lagendijk; and the person who made both versions usable is Manuel Soriano, manu@ctv.es.
_________________________________________________________________
Previous Next Table of Contents
_________________________________________________________________
2. Introduction
Welcome as a future fellow user of feddi and bt :-) Congratulations on your decision to install this package. It's not too complicated; the only troubles you may run into are some permission problems. The sources included in this package have already been patched to make them work more smoothly.
Both fmbedit and bt show some minor problems, so don't flame too much; remember that you didn't pay anything for it. You may contribute by correcting bugs. Don't keep the fixes to yourself, share them. Send me patches and we will make this software improve. A hint: don't run it under X; the terminal database doesn't work smoothly there. I intend to fix this. Surely, some day I'll be able to patch it :-) (I used to say this would be done by the next version :-DDDDDDDDDDD) I am indebted to:
 * Alfonso Belloso : 2:344/17.2 (if I remember correctly)
 * Jose Luis Sanchez : 2:346/207.17 (for sure)
 * Pablo Gomez : 2:341/43.12 (fixes for this file and the scripts for the automation of the personal area)
 * Javier Ruberte : 2:346/401.50
 * Jose Carlos Gutierrez : 2:341/45.17 (scripts to compile the nodelist)
 * Carlos Terron : 2:345/402.23 (patch so ftoss recognizes upper/lower case)
 * Francisco Jose Montilla : 2:345/402.22 pacopepe@nova.es (SGML format)
 * CICCIO C. Simon : ciccio@arrakis.es (English version)
At the end of this file you'll find messages with hints, all sent via feddi.
_________________________________________________________________
Previous Next Table of Contents
_________________________________________________________________
3. Installing FEddi
3.1 Installing the fido user.
We'll install fido as a mail user, but you can give it another name. Wherever you see ~/ in this document, we refer to this user's home directory.
 * file /etc/passwd: include the following line:
   fido::2004:300::/home/fido:/bin/bash
 * file /etc/group: include the following line:
   fido::300:uucp,fido,root
3.2 Necessary packages
You'll need:
 * perl; do ls /usr/bin/perl and if it is not found, install it from disk-set D (Slackware)
 * ncurses; do ls /usr/lib/libncurses.a and if it is not found, install it from disk-set D (Slackware)
3.3 Mailer installation/configuration.
Change to the directory /FEddi-0.9pl5
1. Edit the file Makefile and set the variable SRCDIR to the path of your sources, e.g.: SRCDIR=/root/trabajo/mailer/FEddi-dev
2. Add nlfunct.o at the beginning of the line NODEPRG =, otherwise it won't compile.
3. make
4. If you get the following error: ncurses.h: No such file or directory do: ln -s /usr/include/ncurses/curses.h /usr/include/ncurses/ncurses.h
5. su root, then make install, then exit.
6. It seems that the install utility doesn't copy all of the utilities; do the following: cp utils/* ~/fnet/utility
7. A few files need modification:
   + File printmsg:
     #!/bin/sh
     cat | $HOME/fnet/utility/formatmsg | lpr
   + File exportmsg:
     #!/bin/sh
     if test $1 = "new"
     then
       cat | $HOME/fnet/utility/formatmsg > "$2"
     else
       cat | $HOME/fnet/utility/formatmsg >> "$2"
     fi
8. The fnet directory has the following contents: ./outbound ./msgbase ./copy ./log ./inbound ./utility ./nodelist Create these directories and do the following: chown -R fido.fido fnet
9.
Configuration file ~/.feddirc: + Permissions 644 + User/group fido.uucp ; ; This .feddirc was automatically created with config.user ; ; Profile Section ; PROFILE Manuel Soriano 2:346/207.punto net_name the_passwd outbound 2:* 25:946/100.punto other_net_name the_passwd outbound 25:* 93:346/101.punto other_net_name the_passwd outbound 93:* END ; The first line is your main address, the following are subnets, the routing ; fro 25: to 93: is done by means of 2: ; ; ; ; Paths ; MsgBasePath ~/fnet/msgbase/ InboundPath ~/fnet/inbound/ OutboundPath ~/fnet/ UtilityPath ~/fnet/utility Log ~/fnet/log/feddi.log 200 CopyPath ~/fnet/copy/ NodelistPath ~/fnet/nodelist/ ; ; Misc ; Packer /usr/bin/zip -q -m -k -j %s %s ; Editor /usr/bin/vi %s Beep Yes AutoDelEmpty Yes KeepPKT No KeepNL Yes KeepBackups No ShowAllAddr Yes MaxMsgLength 64k QuoteLength 70 ReplySubject No AskForOrigName Yes AutoNextFolder Yes ; ; End of .feddirc ; You may base your configuration on this file, as it works for me without troubles. 10. File ~/fnet/nodelist/fnlcrc dial 34-6- 3 dial 34-6 dial * pointlist ptlstr34 pointlist eu_point nodelist region34 nodelist eu_nodes dial : According to your zone 34-6 (Valencia), 34-1 (Madrid), 34-3 (Barcelona), etc... As pointlist, the different lists of points, you may use the point lists that come from the bbs, without modification. As nodelist, the different lists of nodes, you may use the node lists that come from the bbs, without modification. That's it. 11. Compiling the nodelist/pointlist I'm using the following scripts. They are simple and work. + file ~/fnet/nodelist/compila0 permissions 777 #!/bin/bash unzip lista.zip mv EU_NODOS* eu_nodos mv EU_PUNTO* eu_punto mv PTLSTR34* ptlstr34 mv REGION34* region34 mv SNETLIST* snetlist mv SUBPTLST* subptlst + file ~/fnet/nodelist/compila1 permissions 777 #!/bin/bash rm fnlc.* fnlc This will compile the lists. If you run into troubles, certainly it's about permissions. Check four files, normally the binaries go to /usr/bin 3.4 Check and usage. Check your mail. Look for a mail package you might have for MS/DOS. Put it into the directory ~/fnet/inbound and do ftoss ; futility pack ; futility link This will always be the way to handle your incoming mail. ftoss will create automatically the folder according to your areas. fmbedit If everything went well you'll see the mail of that package on your screen :-) The editor is quite simple and well documented. It looks somewhat like the fmail's editor. Create a message in an area or two and do the following: fscan This will always be the way to handle your outgoing mail. _________________________________________________________________ Previous Next Table of Contents Previous Next Table of Contents _________________________________________________________________ 4. Installation of Binkley. 4.1 Configuration/Installation of the caller 1. The first thing to do is: change directory to /bt do make su root make install you should get in /usr/bin: -rwxr-xr-x 1 root fido 238983 Sep 15 18:04 /usr/bin/bt and in /usr/lib/binkley: -rwxr-xr-x 1 root root 742 Sep 16 10:04 binkley.cfg -rw-r--r-- 1 uucp root 108 Sep 16 10:10 binkley.day -rw-r--r-- 1 root root 12332 Sep 15 16:20 binkley.lng -rw-r--r-- 1 uucp root 124 Mar 20 2029 binkley.scd -rwxr-xr-x 1 root root 14423 Sep 15 16:20 btctl -rwxr-xr-x 1 root root 13813 Sep 15 16:20 btlng -rwxr-xr-x 1 root root 15649 Sep 15 16:20 english.txt -rwsr-xr-x 1 uucp fido 1603 Sep 15 16:20 fido-toconv 2. 
File /usr/lib/binkley/binkley.cfg FEddiNodelist (1)Port 2 (2)baud 38400 LockBaud 38400 (3)Init ATZ0|~AT&K6|~ (4)Prefix ATDP PreDial ~ PreInit |v``^`` LogLevel 5 LineUpdate Gong AutoBaud PollTries 10 PollDelay 600 Unattended BoxType 0 NiceOutBound ReadHoldTime 1 (5)System seudonimo_fido (6)Sysop tu_nombre StatusLog /home/fido/fnet/log/binkley.log 200 Downloads /home/fido/fnet/inbound/ CaptureFile /home/fido/fnet/log/session.log NetFile /home/fido/fnet/inbound/ Hold /home/fido/fnet/outbound/ Nodelist /home/fido/fnet/nodelist/ (7)Address 2:346/207.XX@FidoNet.org 5207 tel_del_boss (8)Key !the_passwd 2:346/207 (9)Domain FidoNet.org outbound Address 25:946/100.XX@EuroNet.org Key !the_passwd 25:946/100 Domain EuroNet.org outbound Address 93:346/101.XX@SubNet.org Key !the_passwd 93:346/101 Domain SubNet.org outbound You may start with this file. Just change what you need and take away the numbers in parenthesis. + (1), serial port you're going to use 1 COM1, 2 COM2, etc... (*) + (2), port speed, 19200 if it's a 16450 + (3), the modem's initialization string + (4), the prefix for your bbs, e.g.: ATDP (pulses) o ATDT (tones) + (5), your nickname as it appears on the pointlist, w/o the _ + (6), your name as it appears on the pointlist, w/o the _ + (7), your main fido address fakenet bbs_telefone_number + (8), your password and the boss, don't forget to put an ``!'' as a prefix to your password. + (9), Subdomains, if you have some, handle them following the same rules as your main domain. + (*) You may use 5, which will open /dev/modem. Normally /dev/modem is a symlink to /dev/cua0 or /dev/cua1, (ln -s /dev/cua1 /dev/modem). At least I have it this way... 3. Include the following line in your ~/.profile export BINKLEY=/usr/lib/binkley do . ~/.profile (you need to do this just now. The next time you enter as fido you'll already have BINKLEY initialized) 4. Execute bt 4.2 Problems If you run into troubles, for sure it's about permissions or a badly defined path. Check them out. 1. The most common error is: cannot re-open logfile The owner is usually: usuario.uucp. The permissions: 664 2. Another rather common error: Here it might be that the assigned tty doesn't have the appropriate permissions. Specially if this had been used by getty, normally it should get permissions to read and write for everybody. The message was: tty port can not be initialized Solution: chmod 666 /dev/ttyS0 or ttyS1; (COM1: or COM2:). 3. For RedHat users: ln -s /var/spool /usr If you get a screen similar to frodo you could do the following: ALT-Y, call your bbs, it'll leave your mail there and fetch what you got. Then you just need to execute the commands mentioned for mail handling. If it appears to have fallen asleep during the FIRST file transmission, hit the ESC key to wake it up. 4.3 ``Templates''. This is my templates file $FNET/msgbase/template: #if to (AreaMgr|FileScan) #; #; ********** Handling of AreaMgr- and FileScan-Mails ********** #; #else #if group (--InterNet--) #; #; ********** Handling of Internet-Mails ********** #; How are you #1E! #if mode (reply) In <#a> #f wrote: #. #quote #else #. #endif Greetings, Manu #|insertfortune #else #; #; ********** Handling of other Mails ********** #; Hi #1E! #if mode (reply|forward) #if mode (netreply) That happy day #d, #f said to #e in #a concerning "#s": #. #quote #endif #if mode (^reply) On #d, #f would write to #e concerning "#s": #. 
#quote #endif #if mode (forward) Even if it doesn't look like, it's a forward * Message from #f to #e * on #d to #t * concerning "#s" * in #a ,,, (o o) ---------------------------------oOO--(_)--OOo------------------------------ #text ---------------------------------------------------------------------------- #endif #else #. #endif #if group (--Intern--|^$) #if from Manuel Soriano Bye, Manu #|insertfortune #else Bye, #1F #endif #else Bye, #1F #endif #endif #endif \|/ 0-0 dpsys10@dapsys.ch *****---oOo-(_)-oOo---********************************************** * Manuel Soriano * El Perello/Valencia/Spain * Once created your area directories, you can create an origin file in each of them, and insert one or several lines (but not more than 70 chars) referring to your message's origin. _________________________________________________________________ Previous Next Table of Contents Previous Next Table of Contents _________________________________________________________________ 5. Messages, collaborations, tricks >From here on I'll state things I received from fido users. 5.1 futility ------------------------------------------------------------------------------ Message Number 1 from area R34.LINUX ------------------------------------------------------------------------------ From: Jesus Gambero (2:345/201.3) From: All Subj: FEddi Send: 25 Nov 95 15:43:57 ------------------------------------------------------------------------------ Hi. For now, FEddi hasn't got too much documentation, so after a couple of tests, finally I'm able to maintain the message base. futility tool delete "age+15&&protect-&&new-" R34.LINUX futility pack This will delete the messages older than 15 days which are not protected and which have been read. If you don't specify the area name, it'll refer to all. It happens that I leave some areas more days than others, so I have to specify a line for each area, but my customize it at will. Bye. --- FEddi 0.9pl5 via BinkleyTerm * Origin: Message written and send by Linux, of course!! (2:345/201.3) 5.2 File request (FREQ). ------------------------------------------------------------------------------ Message Number 4 from area R34.LINUX ------------------------------------------------------------------------------ From: Javier Hernandez (2:346/207.48) From: ALL Subj: FILE REQUEST Send: 07 Dec 95 06:15:45 ------------------------------------------------------------------------------ Hi! I have been trying to find out how to do the RE: with the Linux software, and I already fetched my first file. I'll explain how I did it, just if anybody is interested, or knows about a more correct manner. First I write a Net, usually to my sysop. After finishing I exit with (Alt+x). Having the message activated, I hit (Alt+g) to open a small window which displays some data. Once seeing it, I pulse `Inc' and type the name of the file I wish to download. Finally I push `Esc'. This should be enough. Next time you call you'll receive the file. At least this is how it worked for me. Any comments? Bye, Javier fjherna@ibm.net _\|/_ ***********************************************-----(O)---**** * Javi(Canary) * Valencia/Spain * --- FEddi 0.9pl5 via BinkleyTerm * Origin: RAMERA: persona que comercia con su RAM. (2:346/207.48) 5.3 Frequent addresses. 
------------------------------------------------------------------------------ Message Number 6 from area R34.LINUX ------------------------------------------------------------------------------ From: Javier Hernandez (2:346/207.48) From: Manuel Soriano Subj: Testing send. Send: 11 Dec 95 23:58:55 ------------------------------------------------------------------------------ Hi Manuel! As of 07 Dec 95, Manuel Soriano wrote to Javier Hernandez concerning "Testing send.": MS> I've received it correctly, in the write area, just tell us how you MS> did it. Hope you'll write us a feddi.howto :-) See, I put a file called "names" into /home/fido/fnet/msgbase which might be similar for you. The file's contents: -------------------------start here------------------------------------- *fj,Javier Hernandez,2:346/207.48 *fm,Francisco Moreno,2:346/207.1 *ap,Alfonso Perez-Almazan,2:346/207.2 *vk,Viktor Martinez,2:346/207.4 *sz,Salvador Zarzo,2:346/207.6 *el,Eduardo Lluna Gil,2:346/207.8 *bs,Bernardino Soldan,2:346/207.10 *ms,Manuel Soriano,2:346/207.14 *js,Jose Luis Sanchez,2:346/207.17 *jv,Jose Villanueva,2:346/207.28 *am,Alberto Mendoza,2:346/207.44 *pe,pepsales@portables.com,2:342/3 *am,areamgr,2:346/207 *rt,rtorres@gimn.upv.es,2:342/3 ----------------------------stop here----------------------------------- This causes that, inserting a net instead of writing a To:, push PgUp or PgDown, you can see the different names. As you see, I've even added some Internet addresses which I'm using sometimes. The first field, I think, is some kind of short keys to make a call directly to this line. I don't remember right now how is this done, but it's easy and you'll find it in the man page for feddi. I don't know if I missed something. If you agree, just add it to feddi.como. Let me know if you think there is missing something, I'll send it to you. See ya. Bye, Javier fjherna@ibm.net fj.chicha@p48.europa3.encomix.com _\|/_ ***********************************************-----(O)---**** * Javi(Canary) * Valencia/Spain * --- FEddi 0.9pl5 via BinkleyTerm * Origin: RAMERA: person dealing with his RAM. (2:346/207.48) 5.4 Scripts and tools. ------------------------------------------------------------------------------ Message Number 11 from area R34.LINUX ------------------------------------------------------------------------------ From: Jose Carlos Gutierrez (2:341/45.17) From: all Subj: Feddi-como, Scripts Send: 26 Dec 95 11:42:31 ------------------------------------------------------------------------------ Hi These are the files I'm using to automate mail. 
file /usr/local/bin/fido #!/bin/bash pushd ~/fnet/inbound .minusculas if [ -f snetlist.a* ] || [ -f subptlst.a* ] || [ -f region34.l* ] || [ -f ptlstr34.l* ]; then ~/fnet/nodelist/compilar fi ftoss futility link fmbedit fscan futility pack popd |------------| file ~/fnet/inbound/.minusculas (the dot is to avoid that it converts itself to lower case) #!/usr/bin/perl while ($nombre = <*>) { $nuevo_nombre = $nombre; $nuevo_nombre=~ tr/A-Z,Ñ/a-z,ñ/; print "$nombre -> $nuevo_nombre \n"; rename($nombre,"$nuevo_nombre"); } |------------| file ~/fnet/nodelist/compilar #!/bin/bash # file to compile the nodelist pushd ~/fnet/nodelist if [ -f ~/fnet/inbound/ptlstr34.l* ]; then rm ptlstr34* unpack ~/fnet/inbound/ptlstr34.l* fi if [ -f ~/fnet/inbound/region34.l* ]; then rm region34* unpack ~/fnet/inbound/region34.l* fi if [ -f ~/fnet/inbound/snetlist.a* ]; then rm snetlist* unpack ~/fnet/inbound/snetlist.a* fi if [ -f ~/fnet/inbound/subptlst.a* ]; then rm subptlst* unpack ~/fnet/inbound/subptlst.a* fi # what I'm doing here is insert the line of my Boss for him to call the bt # with ctrl + y (this is probably the most difficult way to do it, by I know # of no other). grep -i -B 4000 'Boss,2:341/45' ptlstr34.* > /tmp/file1 grep -i -A 4000 'Boss,2:341/45' ptlstr34.* > /tmp/file2 grep -v 'Boss,2:341/45' /tmp/file2 > /tmp/file3 rm ptlstr34.* cat /tmp/file1 > ptlstr34 # you'll have to adapt this line to your system echo ",0,Ma~ana_Remoto,Madrid,Rafa,34-1-6463023,9600,CM,V34,VFC" >> ptlstr34 cat /tmp/file3 >> ptlstr34 rm /tmp/file1 rm /tmp/file2 rm /tmp/file3 # rm -f ~/fnet/inbound/ptlstr34* rm -f ~/fnet/inbound/region34* rm -f ~/fnet/inbound/snetlist* rm -f ~/fnet/inbound/subptlst* rm fnlc.* fnlc popd Bye, Guti. --- FEddi 0.9pl5 via BinkleyTerm * Origin: THE GANG TM (2:341/45.17) 5.5 Automation: The personal area. ------------------------------------------------------------------------------ Message Number 1358 from area R34.LINUX ------------------------------------------------------------------------------ From: Pablo Gomez (2:341/43.40) From: All Subj: The personal area in FEDDI, a fine(ally) version ;-) Send: 24 Jun 96 00:35:31 ------------------------------------------------------------------------------ Hi! Will since some time we have been trying to find out a possibility to provide in FEDDI a personal area allowing the reception of mail directed to us from any area, and, over all, (as the former isn't difficult) reply them in a comfortable way, sending them back to the original areas. The following scripts at least allowed Francisco Jose Montilla and the author of this message to do the trick. The first step is creating an area which will later serve as PERSONAL. We can do it like: (As user fido) $ cd ~/msgbase $ mkdir +PERSONAL $ cp +R34.LINUX/* +PERSONAL/ (PERSONAL is the name you want to give the personal area) Check if the permissions and the owner of this new directory are the same as those you have in other areas. If not, correct them. Next, to clean the messages, do: $ futility "+delete" "all+" PERSONAL $ futility pack PERSONAL If you invoke fmbedit again, you'll the the new area, called PERSONAL! :-) magic? :-) Now we've got the base. Next part: Copy the new messages that are arriving to the system to our name. This is done (almost) automatically. 
If we create a file like: ,,, (o o) File: ~/msgbase/tosspath ---*reiss*------*schnippel*------oOO--(_)--OOo-------*knabber*-----*fetz*--- copy t"Pablo Gomez" PERSONAL ---*reiss*------*schnippel*--------------------------*knabber*-----*fetz*--- that's it. Obviously you'll have to replace my name (Pable Gomez) with yours, and PERSONAL with the name of your personal area. Each time we run ftoss, this will copy to the personal area the messages directed to us. This point deserves a comment. In fact, this will copy also the messages directed to us and received in NETMAIL. In my opinion, this is somewhat brain-dead, as the NETMAIL area is already our personal area. I don't know of no modification to avoid this copy. So a little later we'll have to make a certain adjustment. This is a piece (the important one ;-)) of the script I run to receive the mail. ,,, (o o) File: ~/bin/mimport ---*reiss*------*schnippel*------oOO--(_)--OOo-------*knabber*-----*fetz*--- #!/bin/sh # To manage the personal area PERSAREA=PERSONAL # Mail import ftoss # # Feeding personal area # We just have delivered the messages, generating the necessary duplicates in # PERSONAL. But we'd liked to delete the messages which we just copied to # the PERSONAL area, and which come from the NETMAIL area # futility tool "+delete" \ "new+&&text+\*\*\* ftoss: copied from NETMAIL" $PERSAREA # reconstruct threads futility pack futility link #[...] ---*reiss*------*schnippel*--------------------------*knabber*-----*fetz*--- Be careful: the lines `futility tool ...' and `new ..." are just one. The aim is to delete this redundant messages from NETMAIL. Going on with message handling. The messages in the PERSONAL area contain lines like: *** ftoss: copied from R34.LINUX (for instance) :-) I reply (just in the PERSONAL area) the message, and don't care for anything, _EXCEPT_ to not delete this line, which will serve later as a `witness' to allow the message be replied in the correct area. Then, exporting the mail, I run the following script: ,,, (o o) File: ~/bin/mexport ---*reiss*------*schnippel*------oOO--(_)--OOo-------*knabber*-----*fetz*--- #!/bin/sh USER_BIN_DIR=/home/fido/bin LOCAL_BIN_DIR=/usr/local/bin # Name of personal area PERSAREA=PERSONAL # user name USERNOM="Pablo Gomez" # temp output file name OUTFILE=/tmp/persanswr # Extraction of the messages in the personal area which are due for process # and which will then be marked as `sent' # futility tool "display" "attribute-se&&from+Pablo Gomez" $PERSAREA > $OUTFILE futility tool "+se" "attribute-se&&from+Pablo Gomez" $PERSAREA # distribution to the new areas... awk -f $USER_BIN_DIR/persreply.awk < $OUTFILE # scan the message base # $LOCAL_BIN_DIR/fscan ---*reiss*------*schnippel*--------------------------*knabber*-----*fetz*--- And the `awk' line included in the file persreply.awk reads: ,,, (o o) File: ~/bin/persreply.awk ---*reiss*------*schnippel*------oOO--(_)--OOo-------*knabber*-----*fetz*--- BEGIN { # # Touch this if necessary # ATTENTION: Watch also for instruction blocks marked with "####": # these too will need adjustment. # outputfile="/tmp/tmpreply" # # # down here I suppose only the blocks marked with `###' my need changes # borracmd=sprintf("rm -f %s", outputfile) replyarea="" estado=1 system(borracmd) } # It's only valid the first time found in each message. 
# Avoid copying, so it won't reach another system which is using the same # system /\*\*\* ftoss: copied from /{ if (estado==1) { viejoestado=2 estado=3 replyarea=$NF ### Modify: print "*** pers_area: Copiado desde area PERSONAL" >> "/tmp/tmpreply" } } /^#To: / { user="" for (n=2; n <= NF; n++) { user=sprintf("%s %s ",user,$n) } } # Avoid writing the following lines: /^#Area: / { viejoestado=estado estado=3 } /^#@To: / { viejoestado=estado estado=3 } # always but in the before mentioned cases... estado != 3{ ##### # # ATTENTION!: Modify as above. # Sorry for the hack, but I couldn't make it work otherwise. # print $0 >> "/tmp/tmpreply" } # Restore the previous state estado==3 { estado=viejoestado } /^###MESSAGE_END###/{ if (estado==2) { close (outputfile) comando=sprintf("cat %s | futility addmsg %s",outputfile, replyarea) system(comando) system(borracmd) estado=1 replyarea="" } } END { system(borracmd) } ---*reiss*------*schnippel*--------------------------*knabber*-----*fetz*--- Be careful: there are cut off lines (visibly), and there is a double hack which I wasn't able to resolve better. Instead of defining all of the above variables, there is one, `outputfile' which I had to redefine half way of the script as a constant, because I didn't know how to do it better. I tried to pass the variable quoted in different styles, but I couldn't achieve it. Maybe one of you could give me a hint. This was tested with several simultaneous messages, but I think I never failed to destroy the line with ***ftoss... Regards until the next time. I hope you'll find it useful. I'll be pleased to get comments, improvements, etc. Bye, Pablo GOMEZ pgomez@p12.laereas.encomix.com --- FEddi 0.9pl5 via BinkleyTerm * Origin: Puntomatico Remoto. Linux en Hoyo de Manzanares (2:341/43.40) 5.6 A few `tricks' for those that don't agree with RTFM. REPLYING MAIL. * To reply -in the normal way- the From: in the same area, Alt+r * To reply the To: in the same area as the message: Ctrl+r. * To reply -via net- the message's From: Alt+n * To reply -via net- the message's To: Ctrl+n To be able to do the latter, the addressee must be in the pointlist, otherwise just nothing happens. ``NAVIGATING'' AROUND THE MESSAGE BASE. * To get a list of the areas messages, pulse Alt+l; using then the cursor right key, you'll be changing to the list of areas. * To follow the conversation's thread upon it's Re:, you'll need to hit the Tab key, and see a list similar to that which appears in the previous item. If you continue using this key you'll change the references to the linked messages. You will know that there multiple linked messages (this is what futility link does) by one and the same Re: and by some yellow codes which appear in the right upper corner of the screen, in the zone dedicated to the message's header. FILE OPERATIONS * To do a File Attach, or sending a file ``attached'' to a message, netmail, -once the addressee has been typed- push Alt+y, followed by f; then Alt+j and finally Tab; you'll be able to ``navigate'' up to the file. The latter Tab applies to all operations related to files(insert file, export message to file, etc...) 5.7 Grouping by tens Binkley's appearance: * Create the following file and execute it in place of the bt: File /usr/bin/bbs echo -e "\033(U" /usr/bin/bt echo -e "\033(B" * Type the command: chmod 755 /usr/bin/bbs * Edit /usr/lib/binkley/binkley.cfg changing the value of the line BoxType to 3: [...] BoxType 3 [...] 
_________________________________________________________________
Previous Next Table of Contents
_________________________________________________________________
6. Goodbye and conclusion.
Well, that's all, have fun, and we'll read each other on fido. Don't forget: send me comments and any modifications you have made to this software, but send flames to /dev/null :-) Bye, Manu
_________________________________________________________________
Previous Next Table of Contents
"Linux Gazette...making Linux just a little more fun! "
_________________________________________________________________
Welcome to the Graphics Muse
Set your browser to the width of the line below for best viewing.
Copyright © 1996 by mjh
_________________________________________________________________
Button Bar
muse: 1. v; to become absorbed in thought 2. n; [ fr. any of the nine sister goddesses of learning and the arts in Greek Mythology ]: a source of inspiration
Welcome to the Graphics Muse! Why a "muse"? Well, except for the sisters aspect, the above definitions are pretty much the way I'd describe my own interest in computer graphics: it keeps me deep in thought and it is a daily source of inspiration. [Graphics Mews] [Musings] [Resources]
This column is dedicated to the use, creation, distribution, and discussion of computer graphics tools for Linux systems. My first column, in the November issue of Linux Gazette, left something to be desired in both content and graphics. As one reader pointed out, I didn't even follow my own guideline for making background images. Well, it looked good on my system at home. The problem was one of poor time management on my part. I finished up the chapters of a web server book I'm co-authoring at the end of September, so I had more time to work on this month's column. Hopefully the format is cleaner and the content more informative. And, in the future, I'll try to follow my own guidelines.
Graphics Mews
Disclaimer: Before I get too far into this I should note that any of the news items I post in this section are just that - news. Either I happened to run across them via some mailing list I was on, via some Usenet newsgroup, or via email from someone. I'm not necessarily endorsing these products (some of which may be commercial), I'm just letting you know I'd heard about them in the past month.
New version of Pro MovieStudio driver available on Sunsite archives
Wolfgang Koehler has released the 3.0 version of his PMS-grabber package to the sunsite archives. This package provides a driver and X application for grabbing frames from the Pro MovieStudio (aka PMS) adapter by Mediavision. Depending on when it is migrated to its final resting place, the package can be obtained either from ftp://sunsite.unc.edu/pub/Linux/incoming or ftp://sunsite.unc.edu/pub/Linux/apps/video.
ImageMagick Library updated
A new revision of the ImageMagick Library, version 3.7.7, was released this past month.
Netscape Tcl Plugin released
The Tcl Plugin 1.0 was also released this past month. This is a Netscape plugin that allows web page authors to write Tcl-based applets for their web pages.
Digigami looking for testers for MovieScreamer tool
There is now a conversion tool for creating QuickTime videos. Digigami is looking for Unix Webmasters to be beta testers for its MovieScreamer multi-platform, 'Fast-Start' publishing and conversion tool for QuickTime(tm) movies.
'Fast-Start' QuickTime movies are standard 'flattened' movie files that have been 're-organized' for playback over the Internet (or corporate Intranets).
Did you know?
There is a font archive, complete with sample renderings of the fonts, available at http://www.ora.com/homepages/comp.fonts/ifa/os2cdrom/index.htm? The ftp site for the fonts is at ftp://ftp.cdrom.com/pub/os2/fonts/.
A large list of general graphics information is available at ftp://x2ftp.oulu.fi/pub/msdos/programming/. Look under /theory, /math, /faq and a host of other subdirectories. There is a lot to wade through, but just about all of it has some value, including information on shading and object sorting.
The Bare Bones Guide to HTML is a useful resource for people who need to find the correct HTML syntax for HTML 3.0 or Netscape based web pages.
Musings
O'Reilly releases The Linux Multimedia Guide.
I recently picked up my copy of The Linux Multimedia Guide by Jeff Tranter. This text covers a wide range of material related to the creation and use of multimedia files with respect to the Linux operating system. The text is approximately 350 pages, including source code listings for a number of sample multimedia applications which are discussed in one chapter of the book. As usual, O'Reilly provides copies of the source from their ftp site.
When I first found out about this book I thought "Rats, Jeff beat me to it." Much of what Jeff covers is listed in my own Linux Graphics mini-Howto. However, there are quite a number of items not covered by the LGH (as I call it), such as audio, a bit more detail about video formats and tools, and programming considerations for various hardware (CD-ROMs, joysticks, and sound devices), which make the Linux Multimedia Guide a good addition to the O'Reilly family of Unix books.
The text is divided into 5 sections:
1. Introduction to Multimedia
2. User's Guide
3. A Survey of Multimedia Applications
4. Multimedia Programmer's Guide
5. Appendices
The first section introduces the reader to the various concepts involved with multimedia, such as CD-ROMs, image file formats, and sound files. The chapters here are generally brief, but the one on audio is quite informative. There is a discussion of audio file formats as well as a comparison of a few of the popular sound cards available for Linux.
Section two opens with a discussion of hardware requirements for doing multimedia on Linux systems. Most of this section centers on either the CD-ROM driver or the Linux Sound Driver (now known as OSS). There is also a short chapter on the joystick driver.
The second longest section, A Survey of Multimedia Applications, covers applications for the various forms of multimedia. There are chapters on sound and music applications, graphics and animation applications, hypermedia applications, and games. The last chapter, on games, seems a bit out of place. There are games implemented as network applications using Java, JavaScript and the new Tcl/Tk plug-in for Netscape, but this chapter doesn't cover these. This section is very similar to the LGH in that the chapters provide the program names and URLs associated with them (if any). The number of items covered is smaller than in the LGH, but the book gives better descriptions of the applications.
Chapter fourteen opens the fourth section, the Multimedia Programmer's Guide.
This section is the longest in the book and covers all the devices discussed earlier. Other chapters in this section cover some of the toolkits available to multimedia developers. There is one chapter which contains three sample applications.
In general I find the Linux Multimedia Guide a good reference text with a moderate amount of developer tutorial material. Unlike many of the books available for Linux, this text provides detailed explanations of the various programming interfaces, a useful tool beyond the simple "what is this and where do I get it" that many of the HOWTOs provide. The only drawback that I can see is that, like most other Linux texts, this text does not provide a user's perspective on any of the tools listed. If Linux is ever to go beyond a developers-only platform, there will need to be detailed user guides for the various well-known applications.
More Musings...
* Creating GIF Animations
Textural Creations
Not long ago I got email from a reader of my Unix Graphics Utilities page asking this: I am just getting into the graphics scene and I have POV-Ray (for Linux) and a few other programs. I know how to create an image with a modeller, but how do I apply texture and color to it? My answer was simple enough: it depends on what modeller you use and what renderer you use. POV-Ray for Linux doesn't have a modeller. You have to feed it a text file which contains both shapes and textures, and POV-Ray will render (draw) it. There are 4 modellers that I know of for Linux: AC3D, AMAPI, SCED, and Midnight Modeller. SCED allows you to preview your image using various renderers. AC3D has a built-in renderer, as does AMAPI. All three will output files that can be used by a number of renderers (such as POV-Ray, Radiance, PolyRay, RIB formats, etc.). Modellers create shapes that are independent of the tools used to render the image.
Modellers are great for creating shapes, but the textures applied to those shapes depend on what renderer you use. POV-Ray has its own set of commands that it uses for determining how a texture will look on an object in a scene. Commands for creating textures are different for other systems, like the procedural language (an actual programming language) used by BMRT (which conforms to the Renderman specification - i.e. the formats used by Pixar and their tools).
So, the answer to the question is: it depends on what renderer you use. For POV-Ray you need to learn the command syntax for describing textures. If you can find a copy, pick up "Ray Tracing Creations", 2nd edition, by Chris Young and Drew Wells. It may be out of print. This text has a good reference for the 2.2 version of POV-Ray. Although the texture commands were expanded for the 3.0 version, you can still create 2.2-based textures by providing the "#version 2.2" command in your POV-Ray source file. In this way you have a handy reference for learning how to create textures in POV-Ray. You still have to do this by hand, though. I've heard rumors that there may be a 3.0 text eventually, but I don't have any word on whether that is true.
As far as setting the textures from within the modeller, well, I don't think any of the modellers do that for you. You still have to manually set the textures (SCED allows you to do so from within the modeller, but I'm not sure the others do) using the command language of the particular renderer you're using. The reason for this goes back to what I said earlier: the format of the texture commands depends on what renderer you use.
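To make the hand-editing workflow concrete, here is a rough sketch: a minimal scene written by hand, shapes and texture together, then rendered from the shell. The scene syntax and the option switches are from my memory of POV-Ray 3.0, and the binary may be called povray or x-povray depending on the build, so treat this as an illustration rather than a recipe.

# Write a tiny scene file by hand; the texture block describes how
# the sphere's surface should look.
cat > ball.pov <<'EOF'
camera { location <0, 1, -4> look_at <0, 0, 0> }
light_source { <10, 10, -10> color rgb <1, 1, 1> }
sphere {
  <0, 0, 0>, 1
  texture {
    pigment { color rgb <0.9, 0.1, 0.1> }   // the colour part of the texture
    finish  { phong 0.8 }                   // a shiny highlight
  }
}
EOF

# Render it: +I names the input scene, +O the output image,
# and +W/+H set the image size in pixels.
povray +Iball.pov +Oball.tga +W320 +H240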
It's best to think of modelling and rendering as two separate tasks. If you want to preview your models you still need to run the renderers separately (except for SCED, which will launch the renderer for you, but it's still a separate program - the renderer is not part of the modeller).
I know this is confusing. It was for me too. In fact, I gave up on modellers and now create my images by hand (I use vi to edit the .pov and .inc input files for POV-Ray). I've only recently started to look seriously again at modellers.
Resources
The following links are just starting points for finding more information about computer graphics and multimedia in general for Linux systems. If you have some application-specific information for me, I'll add it to my other pages, or you can contact the maintainer of some other web site. I'll consider adding other general references here, but application- or site-specific information needs to go into one of the following general references and not be listed here.
Linux Graphics mini-Howto
Unix Graphics Utilities
Linux Multimedia Page
Future Directions
Next month:
* What I use the Gimp for - a user's story
* The IRTC - A raytracing competition for the fun of it
* Review: The AC3D Modeller
* Book Review: Jim Blinn's Corner - A Trip Down the Graphics Pipeline
* ...and lots more!
Let me know what you'd like to hear about!
_________________________________________________________________
Copyright © 1996, Michael J. Hammel Published in Issue 12 of the Linux Gazette
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
More... Musings
* Creating GIF Animations
Creating GIF Animations
Recently, while working on a text on Unix web servers, I was tasked with writing about multimedia applications. During my research on this subject I discovered a little known fact about the GIF image file format: it supports multiple images in a single file, which can be used to create animations. Creating GIF images is fairly simple. There are a number of tools available for Linux systems that can either create new GIF images or convert image files in other formats to the GIF format. Tools such as the Gimp or XPaint can be used to create images, while xv or the NetPBM tools can be used to convert images from other formats.
In order to create a GIF animation you must first create a series of GIF images. These images make up the frames of the animation, much like cel animations make up a cartoon (although there is no reason why your GIF files can't be converted from 3D images such as those created with POV-Ray or BMRT). The animation only plays as fast as the host machine can read, decode and display the individual frames. On older 486 systems this might be a problem, so it's wise to keep your images small. For GIF images this means keeping the dimensions (height and width) of the animation small. You should also consider how jumpy you want the animation to be. Small amounts of movement of objects from frame to frame will reduce the jumpiness of the overall animation, but they can also significantly increase the overall size of the GIF file. Since Netscape (the only browser that I know of that currently supports this type of animation) tries to load the entire GIF file before it begins playing the animation, it would be wise to keep the file size small.
Once you have the individual frames created, you'll need to put them all into a single GIF file. You can use a nifty little tool called WhirlGIF to do this. WhirlGIF is a command line tool (no GUI) that concatenates the series of GIF images into a single GIF image and configures the GIF header so that Netscape will know how to play the animation. The GIF header allows for a number of options, including some that are Netscape-specific (Netscape didn't create their own format - the GIF format allows for application-specific extensions). You can provide the number of times to loop the animation and the delay time to use between frames (which can be used to slow down an animation if so desired).
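As a rough sketch of that pipeline, assuming a handful of rendered frames in PPM format; the frame names are made up, and the WhirlGIF option names are written from memory, so check whirlgif's own usage message before relying on them:

#!/bin/sh
# Convert a series of rendered PPM frames to GIF with the NetPBM tools,
# then bundle them into one animated GIF with WhirlGIF.
for f in frame01 frame02 frame03 frame04
do
    # Reduce each frame to 256 colours (a GIF requirement), then convert.
    ppmquant 256 $f.ppm | ppmtogif > $f.gif
done

# -loop makes the animation repeat; -time sets the delay between frames
# in hundredths of a second (both option names as I recall them).
whirlgif -o anim.gif -loop -time 10 frame01.gif frame02.gif frame03.gif frame04.gif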
Since Netscape (the only browser that I know of that currently supports this type of animation) tries to load the entire GIF file before it begins playing the animation, it would be wise to keep the file size small.

Once you have the individual frames created, you'll need to put them all into a single GIF file. You can use a nifty little tool called WhirlGIF to do this. WhirlGIF is a command line tool (no GUI) that concatenates the series of GIF images into a single GIF image and configures the GIF header so that Netscape will know how to play the animation. The GIF header allows for a number of options, including some that are Netscape specific (Netscape didn't create their own format - the GIF format allows for application-specific extensions). You can provide the number of times to loop the animation and the delay time to use between frames (which can be used to slow down an animation if so desired).

There is a terrific page devoted to GIF animations at http://members.aol.com/royalef/gifanim.htm. This page is not Linux (or Unix) specific, but it does include pointers to WhirlGIF, and the information in a number of the pages there is very applicable to creating GIF animations on Linux systems.

_________________________________________________________________

Copyright © 1996, Michael J. Hammel
Published in Issue 12 of the Linux Gazette
_________________________________________________________________

"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________

InfoZIP Archive Utilities
By Robert G. "Doc" Savage, dsavage@accessus.net
_________________________________________________________________

I'm a big fan of utilities. When I saw that CND/RHS were distributed with older versions of the InfoZIP zip/unzip suite of archive utilities, I made upgrading them my first Linux project. It turned out to be a little bit more complicated than I thought it would be. I especially wanted to add in the DES encryption modules to zip/unzip so they would be 100% file compatible with PKWare's archivers for MS-DOS. U.S. State Department rules make it difficult to implement this as an RPM, so I decided to do it as a classic shell script. The end user will have to ftp the source code (especially the DES code module) from the site specified in the script.
_________________________________________________________________

Script #1:

#!/bin/sh
#
# undatezip reverses updatezip and restores a Caldera Network Desktop v1.0 or
# Red Hat Software v2.1/v3.0.3 InfoZIP suite installation to its original zip
# v2.01 and unzip v5.12 configuration. This should only be necessary if you
# need to upgrade from a pristine as-installed configuration.
#
# original versions >>updatezip >>> new versions
# without encryption

__________________________________________________________________________

Script #2:

#!/bin/sh
#
# updatezip is a shell script for Caldera Network Desktop v1.0 or Red Hat
# Software's v2.1/v3.0.3 distributions to upgrade the InfoZIP utilities unzip
# from v5.12 to v5.2, and zip from v2.01 to v2.1. It also adds the zcrypt DES
# encryption module not provided in the RHS (or any other) distribution.
#
# To undo this upgrade and restore a CND v1.0 or RHS v2.1/v3.0.3 installation
# to its original zip/unzip configuration, run the companion file undatezip.
#
# original versions >>updatezip >>> new versions
# without encryption :
#
# unzip52.zip
# zcrypt26.zip
# zip21.zip
#
# Copy them and updatezip to a safe directory (suggest root's home directory
# /root). Use 'chmod 700 updatezip' to make it executable, then run it.
# Execution time is slightly over four minutes on a DX4/100 system with 28M
# of RAM, a 32-bit EISA host adapter, and an older SCSI-1(CCS) hard drive.
#
# IMPORTANT
# ---------
# Caldera Network Desktop 1.0, when first installed, is missing an important
# file required to compile certain programs. The following lines create (or
# recreate) this missing file. This script will fail without it.
#
cd /usr/src/linux
make include/linux/version.h
cd
#
# Section 1. Create the working directory and extract all required files.
#
mkdir /scratch
cp unzip52.zip /scratch
cp zcrypt26.zip /scratch
cp zip21.zip /scratch
cd /scratch
#
# Section 2. Compile unzip first, then zip
#
unzip unzip52
unzip -o zcrypt26       # -o forces overwrite of stub files
cp -f ./unix/Makefile .
make generic
rm -f *.o               # clean-up before next compile round
unzip -o zip21
unzip -o zcrypt26
cp -f ./unix/Makefile .
make generic_gcc
#
# Section 3. Install new versions of the zip/unzip suite. Preserve the
# existing executables and man files first. Use soft links to point
# to the new versions.
#
cd /usr/bin
mv funzip funzip383.export
mv unzip unzip512.export
mv unzipsfx unzipsfx512.export
mv zip zip201.export
mv zipcloak zipcloak201.export
mv zipinfo zipinfo202.export
mv zipnote zipnote201.export
mv zipsplit zipsplit201.export
#
cd /usr/man/man1
mv funzip.1 funzip383.1
mv unzip.1 unzip512.1
mv unzipsfx.1 unzipsfx512.1
mv zip.1 zip201.1
# note there is no zipgrep.1 in this distribution
mv zipinfo.1 zipinfo202.1
#
cd /usr/bin
mv /scratch/funzip funzip39.encrypt
mv /scratch/unzip unzip52.encrypt
mv /scratch/unzipsfx unzipsfx52.encrypt
mv /scratch/zip zip21.encrypt
mv /scratch/zipcloak zipcloak21.encrypt
mv /scratch/unix/zipgrep zipgrep21.encrypt
mv /scratch/zipnote zipnote21.encrypt
mv /scratch/zipsplit zipsplit21.encrypt
#
cd /usr/man/man1
mv /scratch/unix/funzip.1 funzip39.1
mv /scratch/unix/unzip.1 unzip52.1
mv /scratch/unix/unzipsfx.1 unzipsfx52.1
mv /scratch/man/zip.1 zip21.1
mv /scratch/man/zipgrep.1 zipgrep21.1
mv /scratch/unix/zipinfo.1 zipinfo21.1
#
# Now establish the soft links
#
ln -s funzip39.1 funzip.1
ln -s unzip52.1 unzip.1
ln -s unzipsfx52.1 unzipsfx.1
ln -s zip21.1 zip.1
ln -s zip.1 zipcloak.1          # remember, zip.1 is
ln -s zipgrep21.1 zipgrep.1
ln -s zipinfo21.1 zipinfo.1
ln -s zip.1 zipnote.1           # already soft-linked
ln -s zip.1 zipsplit.1          # to zip21.1
#
cd /usr/bin
ln -s funzip39.encrypt funzip
ln -s unzip52.encrypt unzip
ln -s unzipsfx52.encrypt unzipsfx
ln -s zip21.encrypt zip
ln -s zipcloak21.encrypt zipcloak
ln -s zipgrep21.encrypt zipgrep
ln -s unzip52.encrypt zipinfo   # a special link
ln -s zipnote21.encrypt zipnote
ln -s zipsplit21.encrypt zipsplit
#
# Section 4. Clean up the leftovers.
#
cd                # go to your home directory
rm -rf /scratch   # nothing worth saving in the scratch directory
hash -r           # re-sync the paths
#
# That's it...

__________________________________________________________________________

--Doc Savage, Sr. Network Engineer, I-NET, Inc.
__________________________________________________________________________

Copyright © 1996, Robert G. Savage
Published in Issue 12 of the Linux Gazette
__________________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
__________________________________________________________________________

"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________

SLANG APPLICATIONS FOR LINUX
by Larry Ayers
Copyright (c) 1996
Published in Issue 12 of the Linux Gazette
__________________________________________________________________________

INTRODUCTION

John E. Davis of the Center for Space Research at MIT has written an interpreted programming language called Slang, which has a C-like syntax. He has written several programs using this language, including the slrn newsreader and the emacs-like Jed editor. Lately a few other programmers have begun to make use of Slang; one reason for this is that Slang allows the use of color in a text-mode program which will display equally well in an rxvt window under X. Applications which are linked with the Slang library always seem to be text-mode programs. Typically Linux text-mode applications use the ncurses library to handle screen display. Ncurses enables the use of menus, a certain amount of color, and a more complex screen layout. These traits don't always translate well into an X-Windows environment, i.e. running in an xterm or rxvt window. If an application is linked with the Slang library instead, its behavior is more consistent between the console and X sessions, especially when started from an rxvt window.
__________________________________________________________________________

AN ASIDE CONCERNING RXVT AND XTERM

I get the impression that the xterm terminal emulator is used more commonly than rxvt, though this may be due more to tradition than innate superiority. Rxvt has been revised several times recently and in its current form (version 2.19) has much to recommend it. One feature which I appreciate is that its memory usage is much lower than that of xterm. Rxvt handles color requests well, both background/foreground specifications and extension-specific colorization such as "color-ls". The most recent version even allows the use of Xpm images as background, similar to a web-page, though as with a web-page a background image would have to be carefully chosen so as not to obscure the text. Some xterm variants make use of color, but some don't. I find the plenitude of xterms and color-xterms rather confusing; it's hard to tell just which ones you have, and they vary from distribution to distribution. Then there is xterm's Tektronix compatibility, which I've never seen a use for. Reading the xterm man page I get the impression that xterm was developed for older mainframe-and-terminal systems.
__________________________________________________________________________

APPLICATIONS WHICH USE SLANG

1. Slrn is a fast, high quality news-reader which supports threading of messages, decoding of MIME attachments, and has the ability to tell a web-browser to load a URL contained within a message. It has many other features and options; it is one of John Davis's programs and he actively supports it in the newsgroup news.software.readers.

2. Lynx, the text-mode web-browser, looks less archaic when compiled with Slang support. If you can't see the images on a page, at least the text elements and background can be nicely colored!
3. Jed, John Davis's emacs-like editor, is surprisingly capable considering it is a fraction of the size of any real emacs. If you've ever hesitated to start up Gnu Emacs or Xemacs just to read an info page, try Jed; it reads them just as well and is quicker to invoke. Jed has syntax-highlighting for a variety of file types.

4. The Midnight Commander, the exemplary text-mode file-manager, now includes enough of the Slang files in its source distribution to compile with Slang screen management even without the Slang libraries on your system. Slang is the default in recent versions of MC and the two are well-matched.

5. Minicom is available in a binary, Slang-enabled version at ftp://sunsite.unc.edu. Color really makes this classic comm program more usable, especially in an rxvt window.

6. The Mutt mail program is an interesting offshoot of Elm development which is well on its way toward becoming an alternative to Pine and Elm. Slang is listed as an alternative to ncurses in the pre-compilation configure script options, but I can't say how well it works, as it will only successfully compile with ncurses on my system.

7. Dosemu, though still dubbed an alpha version by the development team, is remarkably stable and useful. Recently I compiled the latest version (I had been using an old RPM version) and was surprised to see that the configure script looks for the Slang library. After the compilation I ran ldd against the dos binary and found that it is dynamically linked with the Slang library. Interesting! I looked through the source code and docs to see if there was any information on Dosemu's use of Slang, but finally gave up. You could spend days wandering around the Byzantine directory hierarchy of Dosemu!
__________________________________________________________________________

I'm sure as the benefits of Slang become more widely known we shall see more text-mode applications with Slang support included. There may very well be others out there besides those listed above; these are just the ones I've run across.
__________________________________________________________________________

AVAILABILITY

Precompiled binaries for slrn, lynx, and the Jed editor (with Slang statically linked, I assume) are available at ftp://sunsite.unc.edu and its mirrors. I used these for some time, but recently I obtained the source for Slang and compiled a shared library. The advantage of this approach is that you can compile binaries which dynamically link the Slang library at runtime. Your executables will be smaller, and one shared library can service any number of Slang-using applications. Another advantage of obtaining the source distributions is that you'll end up with more documentation.

John E. Davis's creations (slrn, Jed, and the Slang sources) are available at their MIT home site. The most recent versions, as well as beta versions, can be found there. This Mexican site is the source for the most recent versions of the Midnight Commander, as well as rxvt. Beta versions (which seem stable to me) of Michael Elkins' Mutt mail program are available from this FTP site. Maybe you can get it to compile with Slang! Lynx binaries with Slang support can be found at sunsite and its mirrors. The source for the latest and greatest of the Dosemu releases can be found at the tsx-11 FTP site. (Version 0.64.1 was released in November).
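Incidentally, if you're not sure whether a particular binary picked up the shared Slang library or carries its own static copy, ldd will tell you. A quick check (the paths here are only examples -- your binaries may live elsewhere):

# List the shared libraries a binary wants; a libslang entry means it
# links the shared Slang library at run time.
ldd /usr/local/bin/slrn | grep -i slang
ldd /usr/bin/dos | grep -i slang     # the Dosemu binary mentioned above

If the command prints nothing for a given binary, it was probably statically linked.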
__________________________________________________________________________

If you're like me and work at the console often, you'll find it's nice to have applications available which work well (and look good!) in an X session too. I think you will be pleased with the high quality and low memory usage of the above-listed apps.
__________________________________________________________________________

Larry Ayers
Last modified: Thu Nov 21 13:43:51 CST 1996
__________________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
__________________________________________________________________________

"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________

UPDATES TO MY PAST REVIEWS
by Larry Ayers
Copyright (c) 1996
Published in Issue 12 of the Linux Gazette

I've been writing these short reviews and other articles for the Gazette since issue number seven. Even with the short lead time inherent in a WWW-based publication it seems like new releases and URL changes often happen right after I submit an article. The status of several of the programs has changed since I wrote of them, so I thought I'd take this opportunity to list some of these changes. By the way, I appreciate all of the email I've received in response to my articles; feel free to write if you have any comments or criticism.

* Moxfm hasn't been updated since version 1.00 was released several months ago, but it's working well in the current version. I've included it here because I've received several e-mail messages stating that the URL in the article wasn't working. The current URL of the Moxfm home page is http://sugra.desy.de/user/mai/moxfm. The source and binaries are in uuencoded form; just let them display on the screen of a web-browser, save them to a file, then run "uudecode filename.uue".

* TkDesk has been through several versions since I wrote of it; the current one is 1.0b3, released on Sept. 25, 1996. There are many new features; one which I use often is a file pop-up menu-item which hands the file over to a running Xemacs on another desktop. There is a similar capability involving HTML files and Netscape (or another browser). Check out the TkDesk web-page for the latest news.

* FileRunner has been developing rapidly in the past few months. Its current version is 2.1.1, and many refinements have been made. Its FTP capabilities have been greatly improved; FTP downloads can now run as a background process, and directories can be displayed date-ordered. Remote FTP directories can be saved as bookmarks (accessible from a menu). Many configuration options have been added as well. The built-in shell windows which follow you from directory to directory are very handy. They allow you to see the output of non-interactive commands (such as compilation) and can be dismissed when not needed. There is a FileRunner web-page from which the source can be obtained, as well as from the Sunsite archive.

* Elvis has finally made it to a major release; version 2.00 was announced recently. It's available from its official site. Elvis is remarkable for the small size of the compiled binary, considering how powerful an editor it is. Elvis's ability to display HTML in a readable form meshes well with the HTML format of the extensive help files.

* Vile has a new official maintainer; Paul Fox has handed over the reins to another of the primary Vile developers, Thomas Dickey. Version 6.2 was released recently.
For some reason Vile doesn't seem to be as popular as Vim and Elvis (judging by news-group postings). I urge anyone who favors vi-style editors to give it a try; it really grows on you. Vile's new official release site is ftp://ftp.clark.net/pub/dickey/vile

* XaoS in its current release (1.2) has a new feature which increases its usability for those running X with more than 256 colors: it'll run! Previous releases only worked on 8-bit displays.

* Yodl version 1.08 has been released; it's mainly a "small-bug-fix" release, i.e. if a previous version does what you want, you probably don't need it. It's at sunsite.

* Procmeter has been updated to version 2.2; changes include a choice of solid or bar-type graphing, and refinement of network packet transfer display.

* Xmosaic development has slowed since my review. The powers-that-be at the University of Illinois have decided that Xmosaic will be the second, rather than the first, GTK (Graphics ToolKit) client to be developed. The GTK is a programming toolkit which will take the place of Motif in Xmosaic. Scott Powers, Xmosaic's project leader, stated in a message to the mailing list that development will resume in a couple of months. The upside to this news is that the GTK is the "hard part", so that once work resumes on Xmosaic development progress should be rapid.
__________________________________________________________________________

Larry Ayers
Last modified: Wed Nov 20 17:41:22 CST 1996
__________________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
__________________________________________________________________________

"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________

THE YARD RESCUE DISK PACKAGE
by Larry Ayers
Copyright (c) 1996
Published in Issue 12 of the Linux Gazette
__________________________________________________________________________

INTRODUCTION

It is a common practice to use the rescue/boot disks supplied with a Linux distribution if filesystem problems occur and you need to boot from a floppy. Typically these disks consist of a bootable compressed kernel on disk 1, with the second disk containing basic maintenance tools such as fsck. On the few occasions I've had to boot from such disks the transition from my familiar Linux environment to the bare-essentials, limited boot-disk system (constrained by the size of a floppy disk) has been disconcerting, to say the least. Typically if an editor is available it's a small one with which I've never worked, and many of the tools I'm used to having around aren't there.

Recently Tom Fawcett has been refining a suite of customizable Perl scripts which make the creation of boot-disks from scratch easier. YARD (for Yet Another Rescue Disk) makes use of (and requires) the optional Linux kernel compressed ramdisk option, which allows you to load a compressed disk image into memory at boot-up. Paul Gortmaker has written a lucid explanation of the new ramdisk options in the file "ramdisk.txt", which is in the Documentation subdirectory of recent kernel source releases.

INSTALLATION AND USAGE

The Yard distribution contains two files which need to be edited as a first step. Config.pl is a Perl script which sets such preferences as the type of floppy you're using and whether you are making a single boot-disk or a double. The Bootdisk_Contents file contains a list of all of the files and utilities you would like on your disk(s).
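The editing and build steps are covered in detail in the next few paragraphs; as a rough road map first, a typical Yard session looks something like the sketch below. The script names are the ones discussed in this article, run from wherever you unpacked the distribution; invocation details may differ in your copy of the package.

vi Config.pl            # floppy type, single or double disk set, build device
vi Bootdisk_Contents    # trim the file list until it will fit on the disk(s)
./make_root_fs          # gather files and libraries, build the compressed root filesystem
./check_root_fs         # verify that all needed libraries made it in
./write_rescue_disk     # copy vmlinuz and the compressed image to floppy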
The Bootdisk_Contents file needs to be edited heavily, as it includes much more than will fit on even two disks. Anything you like can be included in this file.

The next step is to run the Perl script make_root_fs. This script gathers up all of the files you've specified (as well as all libraries upon which they depend) and constructs a root filesystem upon whichever device was specified in the Config.pl script. A ramdisk works well. The new filesystem is then compressed with gzip into a single file in your /tmp directory. Once this process is complete yet another Perl script, check_root_fs, is run, which makes sure that all needed libraries, etc. are present.

After all of this preparation you're ready to actually write the rescue disks; here's where you find out if you've attempted to cram too much into them. The write_rescue_disk script first copies your compressed kernel (vmlinuz) onto the disk (the first disk if it's a two-disk set) and then copies the compressed filesystem image you've constructed onto whatever is left. It took me several tries to pare down what I initially wanted on the disks to what would actually fit. The virtue of the Yard system is that all you need to do to try again is re-edit the Bootdisk_Contents file and re-make the filesystem. Yard also writes log-files which can be helpful in diagnosing problems.

Modular kernels are great, but if you boot a kernel image and a capability you need is a demand-loaded module you're out of luck. Yard sidesteps this potential problem by including your modules directory in the compressed filesystem, as well as making sure that the kernel-daemon /sbin/kerneld is started at boot-up.

The result of this process is a customized miniature Linux system. It's a nice feeling to know that, if your filesystem is in shambles due to a power outage or a beta program run amuck, you at least have familiar tools available. Once you've managed to edit a set of Yard configuration files which will successfully write working rescue disks, consider saving copies of these files in case the disks become corrupted. I just replaced the supplied files with my edited copies, then tarred and gzipped the Yard distribution and saved it to floppy.

CAVEATS

Yard gives you the option of using or not using Lilo to boot your disks. I first tried Yard with Lilo, as Lilo has always worked well for me. It wouldn't work with my Yard disks, so I disabled that option. I'm using an old version of Lilo, left over from my original Slackware 3.00 Linux installation, which may explain this failure. Yard works fine without it. Lilo might be necessary if you need to include parameters in order to boot your system, such as those required for some SCSI hard disks.

AVAILABILITY

Yard is available from the Yard home-page, as well as from the sunsite archive and its mirrors. It's well worth trying if you want the ultimate in control over just what is included on your rescue disks.
__________________________________________________________________________

Larry Ayers
Last modified: Wed Nov 20 09:21:50 CST 1996
__________________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
__________________________________________________________________________

"Linux Gazette...making Linux just a little more fun!"
" __________________________________________________________________________ RECENT LINUX CONFERENCES __________________________________________________________________________ CONTENTS: * Unix Expo 1996, by Lydia Kinata * DECUS in Anaheim, by Phil Hughes * Open Systems World/FedUNIX, by Gary Moore __________________________________________________________________________ Unix Expo 1996, October 8-10 in New York By Lydia Kinata, linux@ssc.com __________________________________________________________________________ This show was actually billed as Unix Expo Plus I^2--a nod to the increasing interest in all things NT and Internet. In fact, in 1997 the show will no longer be called Unix Expo at all, it will be billed as IT Forum 97, Internet and Technology Forum. Despite a preponderance of Internet and NT related vendors and seminars, (and the ubiquitous presence of Bill Gates), the show went very well for Linux Journal and SSC. Various disasters struck, notably the loss of half of our booth display by UPS, but all in all it was quite successful. With the exception of Caldera, all of us Linux-types were stuck off in the corner of the show room, but we were still swamped by happy Linux and Unix users who had specifically made the trek in support of their favorite OS. The show in general had a lower attendance than was expected by show management, but the Linux contingent were doing quite nicely anyway. 2,500 Linux Journal and 1,600 WEBsmith magazines were given away. Many people subscribed right there at the show, many others went away clutching their SSC Unix References or books with dazed-but-happy expressions. Those of us working the booth made lots of contacts, and I must say it was a great experience meeting subscribers and customers who share such enthusiasm for Unix and Linux. Those NT developers should take note: Unix users are a dedicated bunch. New York was a blast, although I had to laugh when the locals got panicky when a 'Nor-Easter' blew through. I had to say, "Come on, guys. It's just raining." They should come to Seattle some time. --Lydia Kinata, SSC Products Specialist __________________________________________________________________________ Copyright © 1996, Lydia Kinata Published in Issue 12 of the Linux Gazette __________________________________________________________________________ DECUS IN ANAHEIM by Phil Hughes, phil@ssc.com __________________________________________________________________________ On November 11 through 13, Carlie Fairchild and I attended the DECUS show in Anaheim, California. While DECUS has generally been a good show for SSC, this show was small and we were the only Linux vendor attending. The best guess why is with UseLinux coming up in the same place in January, it was an easy show for people--vendors as well as Linux-heads--to skip. There was a series of talks on Linux presented by Jon "maddog" Hall and myself. Attendance was between 20 and 50, and I think we managed to make some converts. Carlie had also arranged for to speak to the local Linux user's group on Wednesday night. About 25 attended (including "maddog"). I presented a talk called Looking at Linux. Much of this talk focused on the commercial viability of Linux, which was an issue many of the group's members had been attempting to address on their own. In the talk I stressed four criteria for commercial viability: * reliability, * interoperability, * support * and capabilities. The talk was well received and the meeting turned into a informal discussion of Linux in general. 
I look forward to talking with these people again during the UseLinux show.

--Phil Hughes, Publisher, Linux Journal
__________________________________________________________________________

Copyright © 1996, Phil Hughes
Published in Issue 12 of the Linux Gazette
__________________________________________________________________________

OPEN SYSTEMS WORLD/FEDUNIX
by Gary Moore, ljeditor@ssc.com
__________________________________________________________________________

The first week of November, I went to Washington, D.C. to attend Open Systems World/FedUNIX. While several dedicated Linux fans came by the booth, most of the people I talked to knew very little about Linux. Some were just cruising the booths, collecting whatever anyone was giving away, but we don't mind--the literature they picked up may spark some real interest later on. (One show attendee, in addition to taking a few of whatever we had, also took the neat twirly thing we'd acquired from another exhibitor's booth.)

Linux vendors in attendance were Yggdrasil Computing, InfoMagic, and Red Hat Software, giving me a chance to meet Adam Richter of Yggdrasil, Bob Young and Lisa Sullivan of Red Hat, and Henry Pierce and Greg Deeds of InfoMagic. Adding credence to Linux's worth in the minds of those with no free software experience was Digital Equipment's display of a DEC Alpha running Linux and Maddog's enthusiasm for the operating system. (By the time I got over to actually see the machine, someone was demonstrating Quake on it. I sat down and showed him a couple things I remembered from playing Doom--it was kind of surreal to be sitting amidst all the professional frumpery of the show while virtually running around swinging a very large and lethal axe.)

Jeff Leyland of Wolfram Research, the makers of Mathematica, spoke about Wolfram switching to Linux as their development platform. There were other speakers I should have made time to hear, but I got caught up talking to people coming by our booth and asking about Linux. I know that after a few talks, the Linux booths would get flooded with people excited to check it out. I also heard that Ernst & Young--well known for their accounting services among other things--apparently use Red Hat Linux in-house and have asked IBM, with whom they contract for computer services, to support their Linux machines. (If you're from Ernst & Young, please send me some mail. We'd like to hear about how you're using Linux.)

Adam Richter predicted a new version of Yggdrasil's Plug-and-Play Linux in the first quarter of 1997. At OSW they had pressings of their new 8-CD Internet Archives set, which includes several distributions, among them a couple I hadn't heard of before.

I would've felt cut off from the world (yes, even in D.C. on election night) if it hadn't been for David Lescher, who set me up with some dial-in PPP access for my laptop, and David Niemi, who made some necessary tweaks to my chat script. I'm also grateful to Mark Komarinski, who put together a Linux talk on very short notice when I found I was dangerously close to having no time whatsoever to prepare one myself.

The Santa Cruz Operation was there giving away copies of their Free SCO OpenServer. Someone who'd just acquired one of those gems asked me why she'd be interested in Linux if she had OpenServer; I noted its limitations and handed her a copy of Linux Journal, hoping to plant a seed. Some attendees were being less subtle, affixing prominently to their big blue IBM literature boxes the Linux bumper stickers we were giving away.
--Gary Moore, Editor of Linux Journal
__________________________________________________________________________

Copyright © 1996, Gary Moore
Published in Issue 12 of the Linux Gazette
__________________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
__________________________________________________________________________

"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________

SETTING UP THE APACHE WEB SERVER UNDER LINUX
by Andy Kahn, kahn@cs.ucla.edu
______________________________________________________________________

This article is basically a summary of my experiences setting up a web server under Linux. I will start with where/how to obtain Apache, then move on to installation, configuration, and finally how to get things running. This article is written from the point of view of my system, which is a Red Hat 4.0 system with v2.0.25 of the kernel. However, a "generic" installation or a similar setup should apply as well.

* Where To Get Apache
* Installation
* Configuration
* Starting/Running the Web Server
* What's Next
* About the Author
______________________________________________________________________

WHERE TO GET APACHE

The obvious place to get the latest version of Apache is off of the Apache web site: http://www.apache.org. The source distribution file is apache_1.1.1.tar.gz, while the Linux ELF binaries are in apache_1.1-linux-ELF.tar.gz. Grab whatever you find necessary...

If you are running Red Hat Linux 4.0 like I am, during the installation process you are allowed to select whether or not you want to install a web server. If you do, Red Hat 4.0 includes the latest Apache and installs everything automatically with a default configuration. This default configuration even RUNS correctly without any modifications! However, even in this case, please read my notes and preferences regarding installation in the next section.

Typically, unless you need to add special modules or features, the binary distribution or the default Red Hat installation should be fine. However, let's say you wanted to run Apache as a proxy server. In this case, you would need the source so you can compile the proxy module as part of the binary. (Note: I have heard rumors that the binary included with Red Hat 4.0 has some bugs. I have yet to encounter any myself, so take that rumor with a big grain of salt.)
__________________________________________________________________________

INSTALLATION

I'm not going to cover compiling Apache since it's actually a fairly painless process and pretty well documented. Given that, let's move on to actual installation...

Personally, I like to group all the web server files together in a centralized location. If you are installing this manually, then this is something you can do from the outset, and I highly suggest doing this since it will reduce administration headaches. If you had Apache installed automatically as part of the Red Hat installation procedure, then things will NOT be centralized! In fact, I thought the file placement scheme was one of the most confusing I've ever encountered.
Here's what the Red Hat installation does:

    web server binaries:   /usr/sbin/httpd
                           /usr/sbin/httpd_monitor
    config files:          /etc/httpd/conf/*
    log files:             /etc/httpd/logs/*
    web server root
    (cgi, icons/images,
    and html files):       /home/httpd/*

I found this to be really disorganized, so I ended up putting mostly everything under one directory (I left the binaries in /usr/sbin):

    mkdir /httpd
    mv /etc/httpd/conf /etc/httpd/logs /home/httpd/* /httpd
    rmdir /home/httpd

You should end up with:

    /httpd/
        /cgi-bin
        /cgi-src
        /conf
        /html
        /icons
        /logs

And then to preserve the original Red Hat file locations:

    ln -s /httpd /home/httpd
    ln -s /httpd/conf /etc/httpd/conf
    ln -s /httpd/logs /etc/httpd/logs

Finally, I added this link since I felt that it made more sense:

    ln -s /httpd/logs /var/log/httpd

If you are installing and compiling Apache manually, you may want to have the original source files also located under /httpd (or whichever directory you have).
__________________________________________________________________________

CONFIGURATION

Apache has three main configuration files: access.conf, httpd.conf, and srm.conf. If you are running Red Hat 4.0, these files will already be set with the correct directory paths. If you centralized the locations of all these files, but made those symbolic links as I mentioned above, things will still be fine since the symbolic links preserve where Red Hat installed everything. If you are doing a "generic" installation or have some other setup, then you will need to do the following:

In access.conf, change/update the <Directory> entries so they point at the new locations.

In httpd.conf:

    ServerRoot /httpd

In srm.conf:

    DocumentRoot /httpd/html
    Alias /icons/ /httpd/icons/
    ScriptAlias /cgi-bin/ /httpd/cgi-bin/

Essentially, these are the necessary directives in the config files that need to be updated with the new "centralized" organization. For further configuration options, I will have to give the standard statement, "Please refer to the docs." :)
__________________________________________________________________________

STARTING/RUNNING THE WEB SERVER

To make a long story short, you simply need to execute the binary "httpd". Typically, this is done when the system starts up, in one of the rc files. Red Hat 4.0 has more of a System V'ish startup style: in /etc/rc.d/init.d resides httpd.init, which is the script used to start and stop httpd. You can also execute this by hand if you find the need. For other systems (or a manual install), I suggest starting httpd after most other services have started (i.e.: put it in rc.local). A simple line such as

    /usr/sbin/httpd &

will suffice. Obviously, it must start after tcp/ip networking has been started. :)
__________________________________________________________________________

WHAT'S NEXT

Needless to say, I didn't cover actual configuration options and how to manage your web server. The configuration options I leave to the Apache manual. Managing the web server itself depends on what kind of web site you want to run. My own system does not run a "real" web site; in other words, I don't advertise it for anything because it serves no real purpose other than for my own experimentation. However, you are more than welcome to take a look at it since it does have a bunch of Linux-related links on it. The URL can be found at the end of this article. Other than that, I would love to hear any comments and/or criticisms you may have about what I wrote.

Originally, my plan was to write a monthly article about running/managing a web server under Linux.
However, short of actually writing a manual on configuring Apache (for which the Apache documentation is a good enough reference), I don't know what else there is to write about; there may not be all that much more to say. However, one idea for a monthly thing that might be good is to collect hints, tricks, and other useful information related to running a web server under Linux. Think of it more as "2 cent tips for a Linux web server." If anyone is interested in this, please drop me a note!

* Email: kahn@cs.ucla.edu
* Home page: http://www.cs.ucla.edu/~kahn
* His machine: http://vivian.cs.ucla.edu
__________________________________________________________________________

Copyright © 1996, Andy Kahn
Published in Issue 12 of the Linux Gazette
__________________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
__________________________________________________________________________

"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________

[IMAGE]

WELCOME TO THE LINUX WEEKEND MECHANIC!

Published in the December 1996 Edition of the Linux Gazette
Copyright (c) 1996 John M. Fisk
The Linux Gazette is Copyright(c) 1996 Specialized Systems Consultants Inc.
__________________________________________________________________________

Time To Become... The Linux Weekend Mechanic!

[IMAGE]

You've made it to the weekend and things have finally slowed down. You crawl outa bed, bag the shave 'n shower 'cause it's Saturday, grab that much needed cup of caffeine (your favorite alkaloid), and shuffle down the hall to the den. It's time to fire up the Linux box, break out the trusty 'ol Snap-On's, pop the hood, jack 'er up, and do a bit of overhauling!
__________________________________________________________________________

Table of Contents

* Welcome to the December Weekend Mechanic!
* The Mailbox
* A Tcl/Tk Tar Viewer
* (Tcl/Tk Tar Viewer Source)
* Closing Up Shop...
__________________________________________________________________________

[IMAGE]

Welcome to the December Weekend Mechanic!

Well, I'm afraid that the 'ol Weekend Mechanic is going to be a short one this month. I've got six classes this fall and have finally reached the point in the semester where I guess they think we're smart enough to start actually doing things! And so, we're all doing things... LOTS of things, as a matter of fact. That hallowed barometer of academic industriousness, Euclid's Little Known "Shave-To-Face" Ratio, is falling predictably and the No-Doze blood titers are reaching therapeutic levels. As they say in Tennessee, we're all starting to look like a bunch of rugs... "...walked all over, drug outside, and beat with a stick!" Anyway, we're all surviving. All of you out there in Academia Land know what I'm talking about; all of you who've run the gauntlet already and have achieved "A Real Life" will smile knowingly. (And will smile to yourselves, knowing that there is no such thing as "A Real Life")

I've been eating a generous portion of Humble Pie here recently after last month's Tar Tricks and Find faux pas. I sincerely apologize for any mis-information and want to gratefully thank all of you who took the time to drop a note and provide more accurate information. I've gotten permission from a number of writers to include their letters, which can be read below. They add a good deal more light to the subject! Thanks guys!
Also, I did manage to eke out a bit of "recreational programming" time and hacked together a prototype tar archive viewer. This is still pretty alpha stuff, but it appears to be relatively stable and I've actually been using it. Any of you who are interested in Tcl/Tk might enjoy hacking away at this. I'll continue to tinker around with this and, by January or so, just might have a reasonably working version for all of you to play around with. If you're interested, have a look at it below.

Anyway, hope you enjoy!

John
Saturday, 23 November 1996
__________________________________________________________________________

[IMAGE]

The Mailbox

As I mentioned above, I've been eating a good deal of Humble Pie in the past couple weeks after last month's articles on using tar and find. Actually, everyone who wrote was very gracious AND took the time to provide more accurate information. I was impressed by the spirit in which this was done: no one was vindictive, no one was demeaning (although there were a few "raised eyebrow" type letters :-). Anyway, I owe a great debt to all of you who took the time to write. I REALLY appreciate it. And to the school teacher from Des Moines, I'm almost done...

I will always RTFM I will always RTFM I will always RTFM I will always RTFM I will always RTFM I will always RTFM...

Thanks folks. Here's the letters...
__________________________________________________________________________

Date: Fri, 01 Nov 1996 15:32:09 +1100
From: Paul Russell
To: fiskjm@ctrvax.Vanderbilt.Edu
Subject: Linux Weekend Mechanic: November Edition of the Linux Gazette (#11)

Hi John,

Just reading Linux Gazette for the first time, and stumbled upon your Weekend Mechanic page. I'm sure you're going to get more mail about this, but I read with some astonishment your "More tar tricks" section. My Linux box is currently about 1000kms away, but I believe that the

    tar -tvzf file.tar.gz |tr -s ' ' |cut -d ' ' -f8 |less

can be replaced with

    tar tzf file.tar.gz |less

I liked it though. If you want a useful pipes example, how about an "Oops! I untarred in the wrong place and I want to clean up!" example:

    tar tzf file.tar.gz | xargs rm -f 2>/dev/null
    tar tzf file.tar.gz | sed 's:[^/]*$::' | sort -ru | xargs rmdir 2>/dev/null

Analysis and improvement is left as an exercise for the reader. 8-)

Enjoy,
Paul.
--
Paul.Russell@RustCorp.com.au  "Engineer? So you drive trains?"
Lies, damned lies, and out-of-date documentation.
Currently contracted to Telstra, Sydney.
__________________________________________________________________________

Date: Tue, 05 Nov 1996 11:19:09 -0500
From: "James V. Di Toro III"
To: fiskjm@ctrvax.Vanderbilt.Edu
Subject: LG #11 Weekend Mech.

Just a few nits on a couple of the things in this piece.

tar ... Well it sure showed off some neat features of some utilities, but what you did with that first line can be solved by omitting one character from the tar options.

    tar -tzf | less  ==  tar -tvzf |tr -s ' ' |cut -d ' ' -f8 |less

which vs. type ... which will also give you similar results on aliases and built-ins:

    which ls
    ls:       aliased to /bin/ls $LS_OPTIONS
    which complete
    complete: shell built-in command.

This is with tcsh 6.05, YMMV.

--
James V. Di Toro III
System Administrator, GATS, Inc.   | "I've got a bad feeling
karrde@gats.hampton.va.us          |  about This"
W: (757) 865 - 7491                |              -various

(Just a quick note about this: James is right, 'which' works as he wrote above if you're using tcsh [because it is a shell built-in??]. Those of you running BASH and using the 'which' executable will find that the executable does not return information on aliases, shell functions, and shell built-in's. I wrote James back after trying this and he concurred.)
__________________________________________________________________________

Date: Wed, 06 Nov 1996 11:14:51 +1100
From: Keith Owens
To: fiskjm@ctrvax.Vanderbilt.Edu
Subject: More on locate and update

Saw your note on locate/find in LJ #11. According to my manual page, "locate lynx" is equivalent to "locate '*lynx*'", locate does automatic insertion of leading and trailing '*' if the pattern contains no metacharacters. "locate 'lynx*'" will only find files that start with lynx (i.e. no leading directory or /).

I find the locate command and its associated updatedb command to be very useful for indexing ftp lists and cdroms. Most sites and cdroms have a list of the files in one form or another but they are not easily searched. Some are in find format (directory included in file name), others in ls -lR (directory is separate). I created updatedb.gen (from updatedb) to read a file list and build a locate style database, locate.gen then searches that database.

For example, go to sunsite, /pub/Linux, download 00-find.Linux.gz, then run

    updatedb.gen sunsite 00-find.Linux.gz

which builds /var/spool/locate/locatedb.sunsite. "locate.gen sunsite file" does an instant search of sunsite for the file, obviously you have to fetch a fresh copy of the listing occasionally.

Instead of searching several InfoMagic cdroms for a file, mount the first one and

    updatedb.gen im /cdrom/00-find

"locate.gen im file" then does a very fast search of the entire set of InfoMagic cdroms and can be done without mounting any cdroms. updatedb.gen and locate.gen are attached. updatedb.gen works out which format the input file is in and selects the field(s) containing the filename.

--
*** O . C. Software -- Consulting Services for Storage Management, ***
*** Disaster Recovery, Security, TCP/IP, Intranets and Internet    ***
*** for mainframes (IBM, Fujitsu, Hitachi) and Unix.               ***

(The idea of using 'locate' on a CD collection sounds like a GREAT idea. I've not yet had the time to try it, but plan to give this little gem a whirl!)
__________________________________________________________________________

Date: Fri, 08 Nov 1996 09:21:52 -0600 (CST)
From: John Benavides
To: fiskjm@ctrvax.Vanderbilt.Edu
Subject: How to use "-name" option on find command

In your column, Weekend Mechanic, in the "Linux Gazette" (Oct 1996 issue) on your Web page at: http://www.ssc.com/lg/issue11/wkndmech.html

You say:

> The way that it should work is that you give locate a filename pattern
> which it searches for. Such as:
>
>     locate lynx*
>
> However, when I tried this on my system, it simply returned nothing.
> Using locate lynx worked like a charm.
>
> Got me.

Whenever you use any command with arguments that need to contain wild card characters, don't forget to quote those wild card characters from the shell. I teach this to my students in my introductory UNIX class. Remember the shell gets first crack at the wild card. So the shell will try to match "lynx*" with any file names in your local directory.
Use the echo command to see what the shell expands your command line to for the buggy command:

    echo locate lynx*

This will give you an idea of what the "locate" command is really searching for. Any one of the three commands below will prevent the shell from processing your wild card pattern.

    locate "lynx*"
    locate 'lynx*'
    locate lynx\*

The same is true for your other example with find:

    find / -name "lynx*" -print
    find / -name 'lynx*' -print
    find / -name lynx\* -print

Regards,
John
--
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ John Benavides          | Hewlett Packard - CxD                  +
+ 3000 Waterview Parkway  | e-mail: benavide@rsn.hp.com            +
+ Richardson, TX 75080    | (972) 497-4771  Fax: (972) 497-4245    +
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
__________________________________________________________________________

Date: Fri, 08 Nov 1996 16:34:23 +0100
From: Robert Budzynski
To: fiskjm@ctrvax.Vanderbilt.Edu
Subject: why locate didn't work as expected...

Hi John,

Why didn't 'locate lynx*' work for you? Well, here's what happens when you issue that command: first, bash (or any standard shell) attempts to match the pattern 'lynx*' against names of files present in the _current_ directory. If it finds any that match, they are _all_ substituted into the command line and passed on as arguments to locate. This sure isn't what you want... If (as was apparently the case) none are found, the pattern is left unexpanded... so why didn't it work? Well, to quote the man page:

    If a pattern is a plain string -- it contains no metacharacters --
    locate displays all file names in the database that contain that
    string anywhere. If a pattern does contain metacharacters, locate
    only displays file names that match the pattern exactly. As a
    result, patterns that contain metacharacters should usually begin
    with a `*', and will most often end with one as well. The
    exceptions are patterns that are intended to explicitly match the
    beginning or end of a file name.

So, there's your answer! 'Match the pattern exactly' means here that the fully qualified pathname (starting with a /) must match. The other lesson here may be summarized with another quote from the locate(1) man page:

    Patterns that contain metacharacters should be quoted to protect
    them from expansion by the shell.

This applies as well to patterns passed to 'find', i.e.

    $ find /usr/local -name 'lynx*' -print

is the 'politically correct' command line to use.

Merry Linuxing!
--
######################################################################
Robert J. Budzynski
Institute of Theoretical Physics
Warsaw University
Warsaw, Poland
######################################################################
__________________________________________________________________________

Date: Sat, 09 Nov 1996 02:11:00 +0000
From: Phil Bevan
To: fiskjm@ctrvax.Vanderbilt.Edu
Subject: LG issue 11 - find

Hello John,

Glad to see you've not abandoned the Gazette totally. One thing though on your article about 'find'. I've noticed in the past when using the find command, it has not found all the files when using wild card characters such as '*' (as in your example find /usr/local -name lynx* -print). I discovered from one of the Linux newsgroups that the shell tries to expand lynx* first, and it is possible that find will not search all the directories.
To stop bash from expanding the filename enclose it in single quotes, as below:

    find /usr/local -name 'lynx*' -print

Bet you I'm not the first to point this out :)

Regards
Phil
--
This Sig intentionally left blank
__________________________________________________________________________

Again, thanks to EVERYONE that took the time to write! I know that y'all are busy and I appreciate corrections, clarifications, and suggestions.

John
__________________________________________________________________________

[IMAGE]

A Tcl/Tk Tar Viewer

I'm going to apologize at the outset -- it's Saturday and I've still got a small mountain of work to do for the upcoming week and so I just don't have the time to write an awful lot about this. I'll try to summarize the highlights of what I was attempting to do and what actually worked.

To recap from last month, I'd been trying to find a way to get a simple listing of all the files in a tar archive. As was pointed out, this can be done using:

    tar -tf file.tar
    tar -tzf file.tar.gz

depending on whether the file is a tar or tar+gzip file. (I'm assuming that you're using GNU tar, BTW; not all implementations of tar support the '-z' option, which uses 'gzip' to either compress or uncompress an archive.) The purpose for doing this was to allow you to get a tar listing and then use an entry from it as an argument to tar to print that file to standard output. For example, if your tar archive looked like:

    xtoolwait-1.0/
    xtoolwait-1.0/xtoolwait.c
    xtoolwait-1.0/Imakefile
    xtoolwait-1.0/COPYING-2.0
    xtoolwait-1.0/xtoolplaces.diff
    xtoolwait-1.0/CHANGES
    xtoolwait-1.0/README
    xtoolwait-1.0/xtoolwait.man
    xtoolwait-1.0/xtoolwait-1.0.lsm

then using a command like:

    tar -xzOf xtoolwait-1.0.tar.gz xtoolwait-1.0/README |less
                                   ^^^^^^^^^^^^^^^^^^^^

would allow you to view the file 'README' by piping it to 'less'. That was the reason for needing to get a listing of just the filenames in the archive -- to be able to invoke tar with the '-O' option so that it would output the file's contents to standard output.

Now, the thing is that tcl/tk will allow you to capture the output of a command using 'open'. Coupled with 'fileevent', this allows you to direct the output of a command to a text widget for viewing and editing. So this was the direction I was going.

I've actually got this working now. It's definitely NOT a showpiece of tcl/tk coding: this 'ol thing wouldn't win any programming contests. Still, as a quick prototype (I hacked it out in two days...), it gave me some ideas about how to put together something a bit more sturdy. Basically, as it stands right now, its features include:

* a directory browser that allows you to select files and navigate around your file system easily
* a tar archive viewer which lists the files contained within the archive
* a File Viewer which can be invoked by double-clicking on a file; it allows you to edit, save, and print the file
* a Save As dialog box which allows you to save (or append) a file in the tar archive to disk
* a small Print Dialog which prompts you for a file print command (defaults to 'lpr') and prints the file

As I said, it is actually working right now and I've used it several times over the past couple weeks.

Parenthetically, I've been using Tcl 7.5 and Tk 4.1 for development. I don't know if any of you have tried compiling this from sources. Using the supplied makefile, I was unable to get the shared libraries to compile.
It's been a while since I did this but, if memory serves me correctly, it fails on some system test and thus refuses to compile the shared libs. I did find, however, that by adding '-fPIC' to the CFLAGS, copying the *.o files to a separate directory, and then using something like:

    gcc -shared -Wl,-soname,libtcl.so.0.7 -o libtcl.so.0.7.5 *.o

I was able to compile a working shared library. I'd be interested in hearing from anyone else who's tried to compile tcl or tk from sources. I'll quickly admit that I'm still a neophyte when it comes to C and UNIX/Linux programming. The above works, but if it is "Not The Right Way" then please drop a note.

That said, let's take the penny tour...

Directory Browser

To begin, when you start the tarvu program, it displays this directory browser. The path is displayed at the top along with the name of the file (if any) which has been selected. You navigate to a new directory by either:

* clicking on the 'Up..' button to go to the parent directory
* double clicking on a directory within the listbox
* entering a valid directory name in the 'Path:' entry box

If you single click on a file, then its name is displayed after the 'File:' label. Single clicking on a directory has no effect. If you click on a tar or tar+gzip file, then you can use the 'View/Edit...' button to view a listing of the contents of the file. Double clicking on the file has the same effect.

Tar Listing Browser

After a tar or tar+gzip file has been selected, the 'Tar Browser' allows you to view the contents of the tar archive. Now, you can use the full set of operation buttons to either view/edit, save, or print a specific file within the tar archive. Single click on a file within the listing and then click on any of the operation buttons. If you double-click on a file, then it defaults to the file viewer:

[IMAGE]

The viewer allows you to view, edit, save, or print the contents of a file within the archive. The name of the file is displayed in the upper left hand corner. To either widen or lengthen the edit window, click on 'Widen Window' or 'Lengthen Window'. Now, you can manually resize the window, but doing so does not automatically resize the text widget. I've not been successful in figuring out how to do this, although I suspect that it can be done. Until then, use the buttons... :-)

If you edit the file and want to save it to disk (NOT back to the archive), then click on the 'Save As...' button. This brings up a directory browser which allows you to save the contents of the editing buffer:

[IMAGE]

This allows you to save or append the contents of the editing buffer to a file. The directory browser features work in a fashion similar to what was previously described. Because this was a quick hack, I simply coded another proc to provide the save/append feature for a FILE within the archive. So, if you go back to the tar archive list, select a file, and then click on the 'Save...' button, you'll see a directory browser which looks similar to the one above.

Finally, if you click on the 'Print...' button, a small dialog box is displayed:

[IMAGE]

You simply input the command to print to your printer and click on the 'Print' button. Pretty simple, eh?

As I mentioned before, this is no paragon of programming brilliance. This was a rather quick hack, but it does show what can be VERY EASILY done with even the basic tools of Tcl/Tk.
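One footnote to the shared-library workaround above: once a libtcl.so.* actually builds, the run-time linker still has to be told where to find it. A sketch of the usual installation steps, assuming /usr/local/lib as the destination (the path is just an example, and the version numbers match the gcc command shown earlier):

# copy the library somewhere the dynamic loader searches
cp libtcl.so.0.7.5 /usr/local/lib
# give it a link named after its soname
ln -s /usr/local/lib/libtcl.so.0.7.5 /usr/local/lib/libtcl.so.0.7
# make sure /usr/local/lib is listed in /etc/ld.so.conf, then rebuild
# the loader's cache
ldconfig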
If anyone is interested in this, you can get the tcl script here: TCL TAR ARCHIVE VIEWER SOURCE

For those of you using Netscape, hold down the Shift key and single click with the left mouse button on this link to save the file to disk. Call it whatever you'd like, and then set the permissions to something like:

    % chmod 755 tclvu.tcl

I've got mine symlinked to 'tclvu' to make it easier to remember.

In all honesty, there are LOTS of things that could be done to make this more useful or efficient. Just a couple of TODO's that come to mind include:

* Allow viewing of regular files in a directory
* Allow unarchiving a tar archive to a selected directory
* Allow archiving of selected files to an archive
* Add online help

The thing is, there are all kinds of fun things that can be done. This simple Tcl/Tk wrapper for tar just lets you view, edit, and print files at the moment. The tar manual page can give you further ideas about what could be done.

For those of you needing a "real" tar utility, I'd strongly suggest using Miguel de Icaza's GREAT program Midnight Commander. You can pick up the sources at any ftp site which mirrors the GNU utilities, such as: GA TECH'S FTP SERVER

Also, there's a program called xtar which can be found at the ftp.x.org ftp site. I've honestly not seen it mentioned anywhere, and yet it's a VERY handy little program that allows you to browse and view the contents of a tar archive. You'll need the Motif development libraries to compile it, however.

Well, as I said, this was a pretty quick tour. Please feel free to hack away at this and enjoy it. I tried to comment the code, so you should have some idea about my mental state when the thing was written. Hope you enjoy!

John
Saturday 23 November 1996
__________________________________________________________________________

[IMAGE] Closing Up Shop...

Well, I'd hoped to include a lot more stuff in this month's WM column, but time has completely gotten away from me and it's already almost dinner time (and no homework done yet... :-). I must admit that I enjoy doing this a LOT more than Linear Algebra (...sorry Dr. Powell, it's still a GREAT course :-)

So, what are the rest of you guys working on out there...? I upgraded my home system over this past summer to a Cyrix P-166+ machine with a Triton II motherboard, 32 MB EDO RAM, Diamond Stealth Video VRAM graphics card, Toshiba 8X CDROM, and CTX 1765GME monitor. I dropped in the old Maxtor 1.6 Gig drive from my previous machine and have just gotten a second Maxtor 2.0 Gig drive (so Linux can finally have its own drive!). I'll be installing this and reinstalling much of the system over Christmas Break. If this sounds like a brewing "mail brown-out", you're probably right... :-)

I've also gotten pretty enamoured with Tcl/Tk, as you might have noticed. This is a seriously fun programming environment. Now, I know that this isn't for everyone, and there are folks who've tried Tcl and just frankly didn't like the language. Still, there is a growing number of truly impressive add-ons, including tclX, BLT, Tix, and [incr tcl], that add a lot of nice features. I'd especially commend the Tix extension to you. It provides a set of meta-widgets such as directory browsers, tabbed windows, and the like, which saves you from having to code these types of windows yourself and gives you a higher-level widget to work with. If you're interested in this, then definitely run the demo program, as it gives you an IMPRESSIVE tour of the widget set.
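To give you a taste, here's a tiny example of what those meta-widgets buy you. It's written from memory rather than taken from any particular program, the widget names are mine, and it assumes you run it under tixwish (the Tix-enabled wish), so treat it as a sketch:

    # A notebook with two tabbed pages; one page holds a Tix directory list.
    tixNoteBook .nb
    pack .nb -fill both -expand 1

    .nb add files -label "Files"
    .nb add about -label "About"

    # The DirList meta-widget does all the directory-browsing work for us.
    set page [.nb subwidget files]
    tixDirList $page.dl -value [pwd]
    pack $page.dl -fill both -expand 1

    label [.nb subwidget about].msg -text "A tiny Tix demo"
    pack [.nb subwidget about].msg -padx 10 -pady 10

That's the whole program: a tabbed window and a working directory browser in a dozen lines, which is the sort of thing I had to code by hand for tarvu.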
Finally, I've just gotten a microphone for my sound card (SB Vibra 16 PnP) and have been messing around with creating sound files. Pretty cool :-) I'm still not completely facile with all the basics, but I've gotten a few snippets recorded, including a "Happy Birthday" rendition (my wife and me) for our sister-in-law. It'd curl ol' Lawrence Welk's toes, I suspect, but it was fun to send this rascal out via email. You know... "reach out and touch someone..."

Well, here's wishing you Happy Linux'ing! Since I didn't have time this month to do a "Christmas Shopping List" of Linux goodies, I'll try to get it in next month, so that after you return all those bottles of aftershave and the argyle socks, you'll know what to do with the money... :-)

From our household to yours, Wishing you a Merry and Joyous Christmas Season!

John
Saturday 23 November 1996
__________________________________________________________________________

[IMAGE] If you'd like, drop me a note at: John M. Fisk

Version Information:
$Id: issue12.txt,v 1.1.1.1 2002/08/14 22:26:56 dan Exp $
__________________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
__________________________________________________________________________

[IMAGE] LINUX GAZETTE BACK PAGE

Copyright © 1996 Specialized Systems Consultants, Inc. For information regarding copying and distribution of this material see the Copying License.
__________________________________________________________________________

CONTENTS:
* About This Month's Authors
* Not Linux

[IMAGE]
__________________________________________________________________________

ABOUT THIS MONTH'S AUTHORS
__________________________________________________________________________

Randy Appleton

Randy Appleton is a professor of Computer Science at Northern Michigan University. Randy got his Ph.D. at the University of Kentucky. He has been involved with Linux since before version 0.9. Current research includes high-performance pre-fetching file systems, with a coming port to the 2.X version of Linux. Other interests include airplanes, especially home-built ones.

Larry Ayers

Larry Ayers lives on a small farm in northern Missouri, where he is currently engaged in building a timber-frame house for his family. He operates a portable band-saw mill, does general woodworking, plays the fiddle and searches for rare prairie plants, as well as growing shiitake mushrooms. He is also struggling with configuring a Usenet news server for his local ISP.

John M. Fisk

John Fisk is most noteworthy as the former editor of the Linux Gazette. After three years as a General Surgery resident and Research Fellow at the Vanderbilt University Medical Center, John decided to "hang up the stethoscope" and pursue a career in Medical Information Management. He's currently a full-time student at Middle Tennessee State University and hopes to complete a graduate degree in Computer Science before entering a Medical Informatics Fellowship. In his dwindling free time he and his wife Faith enjoy hiking and camping in Tennessee's beautiful Great Smoky Mountains. He has been an avid Linux fan since his first Slackware 2.0.0 installation a year and a half ago.

Michael J. Hammel

Michael J. Hammel is a transient software engineer with a background in everything from data communications to GUI development to interactive cable systems--all based in Unix. His interests outside of computers include 5K/10K races, skiing, Thai food and gardening.
He suggests that if you have any serious interest in finding out more about him, you visit his home pages at http://www.csn.net/~mjhammel. You'll find out more there than you really wanted to know.

Andy Kahn

Andy Kahn is currently a graduate student in Computer Science at UCLA, praying to finish his Masters degree sometime in the foreseeable future. His primary research area is parallel I/O. On the side, Andy also does Unix system administration at Activision, a well-known computer games company. He has also had previous jobs, including system administration and programming while masquerading as a Software Engineer. Andy has been an on-and-off Linux enthusiast since his first SLS v1.02 installation over 3 years ago.

Jesper Pedersen

Jesper Pedersen lives in Odense, Denmark, where he has studied computer science at Odense University since 1990. He expects to obtain his degree in a year and a half. He has a great job as a system manager at the university, and also teaches computer science two hours a week. He is very proud of his "child," The Dotfile Generator, which he wrote as part of his job at the university. The idea for it came a year and a half ago, when he had to learn how to configure Emacs by reading about 700 pages of the Lisp manual. It started small, but as time went by, it expanded into a huge project. In his spare time, he does Jiu-Jitsu, listens to music, drinks beer and has fun with his girlfriend. He loves pets, and has a 200-litre aquarium and two very cute rabbits.

Robert G. Savage

Robert G. "Doc" Savage received his BSE(EE) at Arizona State University in 1974 and is now a senior networking engineer working for a telecommunications consulting firm near St. Louis. An Internet veteran since the earliest days of the ARPANET, he has designed, engineered, installed, administered and consulted for a wide range of UNIX, Novell and Microsoft network systems. He enjoys listening to Garrison Keillor's radio broadcasts, reading Tom Clancy's books, acting in community theater, cruising in his 300ZX twin turbo, tinkering with a tower server in his living room (the hood is always up), and relaxing at the end of the day with his two Siamese cats and a pint of Guinness.

Manuel Soriano

Manuel Soriano lives in El Perello, Valencia, Spain. He works for a Swiss-based company called Dapsys S. A. that provides the Information Retrieval Imaging System called IRIS. His job calls for quite a bit of traveling. He's been in Switzerland, France, and most recently, Prince George, Canada. His FEddi-HOWTO article is the English translation of his article FEddi-COMO that appeared in the October issue.

[IMAGE]
__________________________________________________________________________

NOT LINUX
__________________________________________________________________________

Thanks to all our authors, not just the ones above, but also those who wrote giving us their tips and tricks and making suggestions. Thanks also to our new mirror sites.

Major "Not Linux" projects on my plate these days are the repair of a quilt and Thanksgiving. The "Sunbonnet Sue" quilt was made for my sister Gaynell when she was about 5, and is turning into more work than I expected. But when I am finished, it will be beautiful again and will make a good Christmas present for her. Thanksgiving feels like an even bigger project than the quilt repair--I get to host this year, which means I do the major part of the cooking. I will be serving traditional Southern fare, since I was raised in Texas.
I feel like I should already be cooking to be ready on time. At any rate, I am looking forward to visiting with family, and eating too much. :-) I am also looking forward to the long weekend--four days off from work feels like a vacation!

Have fun!
__________________________________________________________________________

Marjorie L. Richardson
Editor, Linux Gazette
gazette@ssc.com
__________________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back
__________________________________________________________________________

Linux Gazette, http://www.ssc.com/lg/
This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com