Our sponsors make financial contributions toward the costs of publishing Linux Gazette. If you would like to become a sponsor of LG, e-mail us at sponsor@ssc.com.
Linux Gazette is a non-commercial, freely available publication and will remain that way. Show your support by using the products of our sponsors and publisher.
TWDT 1 (text)
TWDT 2 (HTML)
are files containing the entire issue: one in text format, one in HTML.
They are provided
strictly as a way to save the contents as one file for later printing in
the format of your choice;
there is no guarantee of working links in the HTML version.
Got any great ideas for improvements? Send your comments, criticisms, suggestions and ideas.
This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com
The Mailbag! Write the Gazette at gazette@ssc.com
Contents:
Date: Wed, 6 Jan 1999 16:59:49 +0100
From: "W.N. Beukers",
beukers@ampcometal.nl
Subject: Set up Linux as server
I am planning to buy a Linux version to use for a server I am setting up. The main things I want Linux to do are act as a proxy, a mail server and a fax server.
Linux will be running on a PC together with Windows 95 and will handle all the outgoing faxes and all e-mail communications (internal and external). These users also need to be able to get onto the Internet by means of the proxy server.
My last wish is a graphical interface to work with, as I am a novice but still want to set up this system and maintain it myself. Which Linux distribution is best and easiest (Red Hat, SuSE, or Debian)?
Can you tell me what I need as a basis and what additional packages I need, so that I can order them?
--
Wilko Beukers
Date: Wed, 6 Jan 1999 11:17:18 -0500
From: DJ FALCIONE, falcione@bettis.gov
Subject: Idea for an article
I have an idea for an article.
How about a primer on how to set up one's sound card to do true MIDI?
I have an Ensoniq AudioPCI card and have been successful in getting it to play WAV files via the audio out port and also simulated MIDI using TIMIDITY.
But I can't figure out how to get TRUE MIDI rendering like I get with the same card in Windows 95.
Is this a driver issue? Thanks,
--
Dean Falcione
(Check out Linux Journal issue 58. It has an article on Csound that discusses MIDI issues. It's on-line too at http://www.linuxjournal.com/issue58/3187.html. --Editor)
Date: Tue, 05 Jan 1999 22:42:11 -0600
From: Romulo Rodriguez,
romulorc@earthlink.net
Subject: Celeron
I would like to know whether Linux will have any problems with the Intel Celeron Processor. Thanks,
--
R Rodriguez
Date: Wed, 6 Jan 1999 10:33:39 -0600
From: "MARK -The Great- ZOLTON",
mcz@wheat.ksu.edu
Subject: Advanced Linux/Java Concepts
At my university, most new programming courses are taught in Java. Because of that, I have become quite adept at programming in that environment. However, when the time comes that I have a great idea for an application for Linux, I feel somewhat bad about programming it in Java, as it is not native to the system. I feel particularly left out when it comes to gathering information from the system. For instance, I am currently working on a set of Zip disk management tools and have begun coding the core of the application in Java. Since Java is the only language where I have any real experience programming a GUI, I plan on using the Swing widget set to make a slick GUI. Anyway, to manipulate the Zip disks, I make several calls to basic system functions like umount, mount and eject. While this is fine for simply manipulating the disk, I would also like to gather information about the disk, such as: is there a disk in the drive, is it already mounted, etc. Can you see where I'm going? Although Java can do quite a bit, its platform independence seems to limit it. I would like to know if there is a Java package designed for use with Linux that can provide me information about the system. Or, if that does not exist, does anyone know of a simple, effective method of gathering information from the system? Maybe parsing output from other Linux utilities? Thanks
--
Mark
Date: Wed, 6 Jan 1999 11:30:12 -0600
From: "MARK -The Great- ZOLTON",
mcz@wheat.ksu.edu
Subject: Getting started with programming for Linux
Although I am a somewhat experienced programmer, I find myself wanting to know more about programming for Linux. I have a little C under my belt as well as C++ and a lot of Java (from university classes) and I'm just learning Perl. I am very interested in programming for Linux (specifically X), but I don't know where to start. I don't know enough C to begin fiddling around with other people's source, so I'd like a general introduction to programming for Linux (how to interact with the system, how to program a GUI using GTK, QT, etc..., and how to write Window Maker docklets). However, seeing as how I have only a little knowledge of C, if there is an introduction which provides said things along with intermediate C programming, that would be the best. Does something like this exist and would the O'Reilly X books be of any use at this stage in my development? Thanks again,
--
Mark
Date: Thu, 07 Jan 1999 01:01:58 -0600
From: Bob Counts,
rcounts@troi.csw.net
Subject: Gzip and tar files
I am looking forward to reading the Gazette, but for right now the only machine I have connected to the Internet runs Windows 98. I would like to download the Gazette, but I don't have any way to expand and un-archive gzip and tar files in Windows. Is there any software that you know of that will do this? I am still in the infancy stage when it comes to Linux and I need all the help I can get. I know your magazine will help, but until I get PPP going on my Linux machine I am stuck. I should mention that my Linux and Windows computers are separate boxes. Thanks
--
Bob Counts
Date: Tue, 12 Jan 1999 12:58:01 +0100
From: Ottar Engstrøm,
Ottar.Engstrom@lfk.mil.no
Subject: Matrox Productiva G100
I am trying to configure X on my PC. XF86config asks me several questions I cannot answer, like the RAMDAC, chipset, etc., for my Productiva G100 8 MB AGP graphics card. I would be pleased if you could answer me.
--
Ottar
Date: Tue, 12 Jan 1999 18:01:19 +1100
From: "deves",
deves@eisa.net.au
Subject: EMM 386 Emulator
I'm trying to find EMM386; can you give me any addresses where I can download this emulator? My computer needs it to play most games, including the famed POKEMON game. Do you think I should get this emulator, or wait for the PC game? P.S. I still want those addresses!
--
deves
Date: Mon, 11 Jan 1999 10:53:14 -0500
From: GBE, hawk@valinet.com
Subject: new user
I'm new at Linux (RH 5.2) and I have a question. When I download files using Netscape 4.04, it puts them in my root directory. Now, I guess I'm a little anal-retentive, but I would like them to go into a folder called "download" or some other place. When I went to upgrade my XFree86, the install directions said it was supposed to be in /var/tmp. Now, I can mkdir the folder, but do I need to set permissions on it? Do I have to link it to somewhere?
Please give me the commands to do this, if you can. Thanks
--
Gene Euvrard
Date: Fri, 8 Jan 1999 19:32:05 +0200
From: "Volkan Kenaroglu",
volkan@sim.net.tr
Subject: FTP Server
I installed Debian 2.3 recently, and I want to build an FTP server. All I need to know is how to do this :) But I have never tried, so I don't even know where to start. Please help! Any information would be appreciated. Thanks, Linux-mates.
--
Volkan
Date: Thu, 7 Jan 1999 23:28:18 -0800 (PST)
From: Shanti Mohan,
kas6719@yahoo.com
Subject: Trouble on Linux
This is regarding the CD-recording software available on Linux. When the program is doing an actual write to the CD-R and some other user on the server tries to remove a very big file using "rm" (the file is about 400 MB), the CD-recording program stops writing. This also happens when a user is copying a large amount of data on the server. Is there any solution to this problem, as it means that my server is locked while a write is in progress?
Could you please help ? Thanks
--
Shanti Mohan
Date: Thu, 14 Jan 1999 09:30:57 +0000
From: Andreas Neukoetter,
ti95neuk@de.ibm.com
Subject: Idea for an article ...
I'm one of the poor guys in Germany who has to use a provider for his Web server ... instead of hosting it myself.
The biggest problem is keeping the "online" site in sync with the "off-line" one. Since I chose a cheap provider, I have no telnet access to "my" server and can't use the wget or mirror approach.
I've written some Perl scripts to make "crc32" lists (in fact they just sum up the bytes, since my crc32.pl just doesn't work) and execute them "off-line" and "online" (as a CGI ... the only way to run programs on the server). These lists are compared and differing files are synced ... it works, but I don't find it satisfying :(
Does anybody have a "better" solution?
--
Anti
Date: Sat, 16 Jan 1999 14:26:26 -0000
From: "Jonathan Homer",
jhomer@pulsesoftware.demon.co.uk
Subject: Re Telnet!
I need help with the telnet daemon. It works perfectly except when you connect via Windows or NT (sorry). As far as I can tell, it does a username lookup. Since NT or Win 95 does not run such a service, there is a pause of 10 seconds or so. I have not yet found a way to switch this lookup off. Can anyone help me? Thanks
--
Jon
Date: Sat, 16 Jan 1999 06:05:39 -0800 (PST)
From: Steve Foster
steve_p_foster@yahoo.com
Subject: Xaw3d Documentation
Just a short note: is there any documentation available for the widget set? I have used the example in LG 2(?), and fancy a crack at some other styles.
--
Steve
Date: Fri, 22 Jan 1999 19:29:55 -0500
From: "Jeffrey S. Flowers",
ftn@bellsouth.net
Subject: Linux in ROM
The recent letters about putting Linux on a floppy are interesting to me, but what I am interested in is putting Linux in ROM. I have a used 486, and what I would like to do is buy an ISA card that emulates an IDE hard drive. I've seen them advertised, but to work with Linux either a custom driver would be needed or Linux would have to be set up to use the BIOS for all disk accesses.
Does anyone know of anyone doing this kind of thing? Thanks
--
Jeffrey
Date: Fri, 22 Jan 1999 10:14:45 -0600 (CST)
From: Andy Kraut,
opie4624@wagner.mtco.com
Subject: Help Wanted -- Client 32
My High School uses Novell's Client 32 for all of their Internet connections. This means that only the main server has an IP address. Does anyone know how to make Linux (Red Hat 5.2) use the Internet over this? IPX is the only protocol in the Network settings of the Win 95 machines here. Thanks in advance,
--
Andy Kraut
Date: Wed, 27 Jan 1999 10:51:22 +030
From: "bman", biz_bman@hotmail.com
Subject: A Question Please
First, I like your web site, and second, I have a question.
I have two 3Com V.90 modems; one is an internal "3Com V90 Voice" and the other is an external 3Com V90. I am using each one with a Linux system and have them connected to each other by a telephone line. My problem is that I don't get the 56K speed that V.90 should provide; I get 33.6 or something like that. Is there a way to tune up the modems under Linux? Thanks a lot.
--
bman
Date: Sun, 24 Jan 1999 17:10:18 -0600
From: "Aaron Becker", abecke2@uic.edu
Subject: Help with AGP Riva TNT and Linux
I just installed Red Hat Linux 5.2, and I don't know how to configure it to utilize my 16 MB STB Velocity 4400 AGP graphics card. That card is not in the card database, unfortunately. I can start the X Window System, but the resolution is only 320x200, which renders X virtually unusable. I would appreciate any help anyone can give me on this subject. Please bear in mind that I am extremely inexperienced with Linux when you respond. Thanks
--
Aaron
Date: Wed, 27 Jan 1999 21:07:17 +0100
From: "Oriol Molist",
omsv@mail.cotursa-hotels.com
Subject: Suggestion
I am a Linux user. I have set up several PCs as X terminals, but it is quite tedious and takes too much time. I want to create a script that allows easy setup of an X terminal with lpd and Ghostscript printer support, with all the X terminals sharing the same NFS root; this would make it easy to install a network of X-terminal PCs. Imagine having the equivalent of Windows Terminal Server without having to pay anything.
Please if anyone is interested in helping me, send me e-mail. thanks
--
Oriol Molist
Date: Wed, 27 Jan 1999 00:18:25 -0500 (EST)
From: jvu001@umaryland.edu
Subject: Help: Linux, laptop, PCMCIA SCSI
I have a Toshiba 220CDS laptop; it once ran Linux on an 800 MB partition, but I deleted the partition because I needed the space. I have a PCMCIA SCSI card and am thinking about getting an Iomega Jaz drive (either 1 or 2 GB) and installing a Linux partition on that external drive. My question is: is this possible? Has anyone attempted this and successfully installed Linux that way? I'm thinking that I would have to use DOS to load the PCMCIA drivers first and then use loadlin to boot the Linux partition. Am I correct in thinking that this will work? Thanks.
--
John
Date: Tue, 26 Jan 1999 08:58:27 -0600
From: Pete Nelson,
pete.nelson@ci.stpaul.mn.us
Subject: Serial Headache
I had been trying to set up a PPP connection from my Red Hat 5.2 box at home to various ISPs. It was so problematic, I ended up writing a script that would begin dialing and fork an xterm with a 'tail -f /var/log/messages' so I could watch it fail.
I ironed out all the bugs in my chat script (Linux would be no fun if everything worked perfectly out of the box!), and pppd would connect - but it would then bomb out.
The messages were always the same before pppd died:

pppd[xxx]: Serial connection is not 8-bit clean.
pppd[xxx]: Problem: bit 7 always 0.
So it looks like a serial problem. But I haven't found a fix with 'setserial' or anything in my BIOS, or in the PPP setup. My guess is it's something incredibly simple that I'm just completely overlooking, but no one else that I know can figure it out, either.
If anybody knows the answer to this problem, I'd really like to hear it ( and you can even throw in a 'DUH!' if you so desire - I'm almost positive there's a real easy answer to this! ) Thanks.
--
Pete Nelson
Date: Mon, 25 Jan 1999 14:49:53 +0800 (HKT)
From: Romel Flores, rom@ncc.edu.ph
Subject: (newbie question) messed up terminal
tty1 of my Linux box went gaga and can't accept the enter key. It just displays the ^M when I press the enter key and ^? when I press backspace.
How do I solve the problem without resetting the machine? Thanks.
--
R. Flores
Date: Fri, 29 Jan 1999 12:43:06 -0800
From: "Rick Lim", rick_lim@bctel.com
Subject: PPP dialin and out from the same box
I can connect to my ISP (PPP), which uses a dynamic IP address. I can then turn around and configure the same serial port with a static IP (PPP) for someone to dial into the same box.
But if I then try to connect to the ISP, my box has the same static IP that was assigned to the port, and it will not let me connect.
Is there a way to dial out with PPP using a static IP address and still have PPP dial-in assign an IP from my LAN? Thanks for any help.
--
Rick
Date: Fri, 29 Jan 1999 11:13:38 -0500
From: Dean Maluski, n0ety@home.com
Subject: Netscape
I tried using the tip to have Netscape use the Mail directory. OK, now I have created all my sub-directories in Mail, but they start with capitals, so Inbox is not the same as inbox.
Is there any way to make them the same, preferably with Netscape looking at inbox and not Inbox? One cool thing is that now when I look at the Message Center I have a choice of looking in Inbox or inbox, and all directories within /Mail, using Netscape.
--
Dean
Date: Tue, 26 Jan 1999 22:19:36 EST
From: tomf7@hotmail.com
Subject: Linux
So I finally got Red Hat 5.2 installed after 8 tries; now what? It seems like a fun toy, but is it really useful? I can't get Netscape going because the server doesn't have a DNS even though I put one in for it. xplaycd reads the CD, but there's no sound. The time I spend on this system doesn't make up for the cost. Linux has at least light years to go to catch up with anything that runs.
Date: Wed, 06 Jan 1999 16:43:22 +0100
From: Christian Schaller,
frostking@linuxrising.com
Subject: RE:Anouncements by Sun & TrollTech
After seeing the latest issue of Linux Gazette I have a couple of comments.
1) I often feel that the stories covered in Linux Gazette, and thereafter Linux Journal, are dated; these license announcements are old and have been heavily debated on Slashdot etc. As a Journal subscriber, I for one would appreciate it if the currentness of the stories covered in the Gazette and the Journal were closer to the date of publication than it is today.
2) As for the articles' content, there is one issue I think should be brought up when the "open-source" licenses are discussed, and that is the fact that these licenses are a bigger threat to the free software community than proprietary software. Most of these licenses make it impossible to reuse code, and they undermine the success criterion that the GPL/LGPL and BSD licenses give open source software: enabling anybody to modify or include code or complete software packages in their own software. If these types of licenses are accepted as just as good, the best scenario we might hope for is that anybody making free software "just" has to include 20 different licenses with the software, which has to consist of 15 different patches. I hope SSC, through its publications, takes care not to support such a development.
Sincerely,
Christian Schaller
(The realities of life are that both LG and LJ are monthly magazines. If an announcement is made on the 4th of the month, it won't show up in LG until the next month. For LJ, it's even longer because there's the lead time needed to get the magazine in print, etc.
We could, of course, just ignore all news related issues and stick with technical articles only, but then we wouldn't be getting our opinions out there.
What would be nice is if these companies would tell us 2 months in advance so we could have the stories in print in LJ at the same time the announcement is made. But this isn't likely to ever happen--insider information and all that.
One of the reasons that I put the article in LG was to get it out a bit quicker than it will appear in LJ. You are not the only one that has made this particular complaint. However, I ask that you all cut us a bit of slack--we are not a daily newspaper.
As to your second point, I noted that these licenses were not the same as GPL--only a step in the right direction. Thanks for writing, --Editor)
Date: Wed, 06 Jan 1999 23:07:49 +0100 (CET)
From: jfm2@club-internet.fr
Subject: Destroying the Kernel Compiling Myth
Once again we find an article propagating the myth of kernel compiling (the one written by a guy from India). The problem is that since 1996 the benefits of this have been nearly nil in a well-designed distribution.
I think this myth is very harmful to Linux: as long as there are people claiming "Thou hast to recompile thy kernel", it will be impossible to attract non-hackers to Linux. That means confining Linux to a _small_ programmers' ghetto.
The MIME attachment is an analysis of the benefits of compiling a 2.0 kernel. It is based on performance measures, simple maths and source reading. Quantitative analysis shows there are far more effective ways to optimize a Linux box. I talk about them, but that should be developed further. The text will be part of the Independence distribution. If you think it is not acceptable for LJ to publish something that will be on a web site in a few days, then publish it in Linux Gazette.
--
Jean Francois Martinez
Project Independence: Linux for the Masses,
http://www.independence.seul.org
Date: Thu, 07 Jan 1999 23:30:20 -0500
From: Jim Heyssel,
jheyssel@bellatlantic.net
Subject: Make Linux Better, Yet!
I am happy with your site. I am suggesting some improvements to Linux itself which would make it the enterprise software of the next decade.
1. Give Linux full journaling, unlimited file-size, and scalable multiprocessor support. Whether using ext2 with new 64-bit fs, or writing an integrated driver for making ufs, or xfs, or ntfs, it does not matter.
2. Incorporate full IPV6 support. Incorporate complete networking interfaces with NT, Novell, Mac, other UNIX systems. A lot of support is already there, but I am particularly interested in Network Directory Service type support and Domain control support with one login.
3. Fully integrated KDE desktop environment - when you install application software, it should be on the desktop and automated for dummies. Not everyone is a hacker. But everyone who uses computers for the sake of interests other than the computer itself (unlike many of us Linux geeks), should be able to download and install any application without having to read an inordinate amount of documentation or worry about configuration files (unless, of course, we enjoy that sort of thing).
4. Multi configuration automation for distinct uses - e.g. an enhancement like Red Hat's for various types of use: server, router, desktop workstation, database server, etc.
5. Software that deliberately aims at inter-operability with file formats generated by Microsoft, Apple, and other popular software applications.
6. These goals can easily be achieved in the next year and make Linux number one, with a combination of features to entice the most innovative of hackers, and most mundane of end-users.
7. Tell me where to begin. If anyone else is interested in any one of the above, I would like to collaborate.
--
Jim
Date: Mon, 18 Jan 1999 18:58:47 -0600
From: Brian Bray, ixnay@wws.net
Subject: Jan, 99 article Xwindows vs. w95/98/NT
First let me say that I love you :-)~ Secondly, in your article from Jan '99 entitled "X Windows versus Windows 95/98/NT: No Contest", Paul Gregory Cooper states that...
"Windows95/98/NT on the other hand is a different kettle of fish. Here the OS, GUI, WM, and desktop aren't clearly separated (as in UNIX) but are all rolled into one. Thus you have whatever choice Microsoft happen to give you, i.e. windows themes.
For Microsoft this is an advantage - it stops people butting in and rewriting parts of their OS which could potentially lose them money. For instance they realized that with the old windows 2/3.1 you could simply replace MS DOS with another compatible DOS such as DR DOS from Caldera. In an ongoing court case Caldera allege that MS added code to windows to make it seem like there was a bug in DR DOS. With 9*/NT being all rolled in one there is no need to resort to such tactics."
While I agree with everything this article states, I would like to point out that users of Windows 95/98/NT can indeed change their shell to an AfterStep-like interface called LiteStep.
http://www.multimania.com/jdubois/litestep/index.htm
I have not personally used it, but I know people who have, and it doesn't look too bad.
Thanks for your time,
--
Brian Bray
Date: Fri, 15 Jan 1999 05:55:47 -0800 (PST)
From: Casper Boden-Cummins,
casperbc@yahoo.com
Subject: X
Here's a top tip: the popular X Window System is _not_ called `X Windows'. There is no such product. Instead, the man page on X says:
The X Consortium requests that the following names be used when referring to this software:
X
X Window System
X Version 11
X Window System, Version 11
X11
I'd be overjoyed if we could ditch this M$-inspired mistake! ;-)
--
Casper Boden-Cummins
Date: Thu, 21 Jan 1999 10:33:35 -0500
From: Michael Bright,
mabright@us.ibm.com
Subject: How about a cross platform section?
This is to the Gazette as well as Linux Journal. From what I've seen in the industry, most businesses are using Linux in a heterogeneous environment. They are doing this because they don't want to jump into Linux with both feet. A lot of these are NT/Linux houses which leverage the abilities of both platforms to get the job done. This could be anything from a collection of tips to entire articles. I see NT/Linux-related questions and tips in almost every issue; maybe it's time they were put in the same section. I even have an idea for a logo or symbol: take a Yin-Yang and put a Windows emblem in the space for the white dot and a penguin for the black. The black background could be made to resemble the NT Workstation package with the "edge of space" graphic, and perhaps the penguin could be in an arctic scene.
This idea does not have to be limited to just Linux and NT, there are connectivity issues for Apple, OS/2/Aurora, Novell and others.
Thanks for your time.
--
Michael
Date: Thu, 21 Jan 1999 13:25:02 +0000
From: Me, deltax@pragma.net
Subject: Quark Xpress on WinDos?
Quark Xpress was originally a Mac product.
I was unaware that it was ported to winferior systems.... Indeed it would be very nice to have Quark under UNIX. From what I remember of using it (a long time ago, an old version!), it was very nice, efficient and powerful page design software.
--
Eric
If all you need is the ability to telnet into your Linux box, there is a simpler way (assuming your ISP gives you a Web site with CGI). First, create a script on your site called "updateIP.cgi":
#!/bin/bash
echo $REMOTE_ADDR >latestIP

and another called "telnet.cgi":
#!/bin/bash
echo Location: telnet://`cat latestIP`
echo

(Don't forget to make the CGI scripts executable.) Set up a cron job that will do "lynx -source http://www.example.com/~foo/updateIP.cgi >/dev/null 2>&1" every 15 minutes (or whatever). (Replace http://www.example.com/~foo/ with the URL of your site, of course.) Now you can set yourself a bookmark for "http://www.example.com/~foo/telnet.cgi"; when you go to it, your browser will be redirected to the telnet: URL and will (should) fire up a telnet session.
No need to pay somebody for Dynamic DNS or a domain name. If your ISP doesn't support CGI, you can probably hack up something with FTP instead.
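A sketch of such a crontab entry, using the same hypothetical URL as above and a 15-minute interval (install it with "crontab -e"):

# fetch updateIP.cgi every 15 minutes so latestIP stays current
0,15,30,45 * * * * lynx -source http://www.example.com/~foo/updateIP.cgi >/dev/null 2>&1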
--
John (Francis) Stracke
Another way to make it boot SCSI first is to install the IDE drive on the *secondary* IDE controller, not the primary. Whether this works or not depends on the BIOS and the SCSI card.
--
DJ
my 0.02 euro:
This is a problem with the net-tools used. The /proc/net format changed during 2.1.x development and old net-tools just can't grok it. The 2.1.x Documentation/Changes file states version and location of the net-tools you need to get correct results: for 2.2.0-pre4 it's v1.49.
It's generally a Good Thing to check Changes after patching the kernel tree. There are more things you need to consider when running a 2.[12].x kernel on a 2.0.x distribution, and Changes has the details.
Linux Gazette is a useful piece of work. Thanks!
--
Michel van de Ven
I read your article about booting Linux and NT. I have a triple-booting solution for you. I read this in the Jan '99 PC@uthority, so I can't claim much credit for it.
I recently saw a suggestion for triple booting NTFS, FAT32 and Linux. Well, here's a quick tip: Linux can be put into the NT boot menu. To do so, run LILO to create a boot sector on your Linux partition, then run:
dd if=/dev/hdc1 of=/dev/hda/bootsect.lnx bs=512 count=1

Replace /dev/hdc1 with your Linux partition and /dev/hda/ with the mount point of your "C:" drive under NT. This copies your Linux boot sector to a file which NT reads as C:\BOOTSECT.LNX. Then append to C:\boot.ini:
c:\bootsect.lnx="linux"

Reboot, and Linux should work off the NT boot menu.
That is the article I saw, word for word, and I found that it didn't work, so here is a version that does. (I did this before converting Win98 to FAT32.) First make sure that the "C:\" partition is mounted:
mount -t msdos /dev/hda1 /mnt/win98

Then reference /mnt/win98 in place of /dev/hda1, so the line should look like this:
dd if=/dev/hdc1 of=/mnt/win98/bootsect.lnx bs=512 count=1

I found that it worked.
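For context, a minimal boot.ini with that line added might look roughly like the following; the ARC path and menu text here are only illustrative, not taken from the tip:

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINNT
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows NT Workstation 4.00"
c:\bootsect.lnx="linux"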
--
Peter deVries
Here is a two cent tip that I have been meaning to submit for a long long time now.
If you have a large stack of CD-ROMS, finding where a particular file lies can be a time consuming task. My solution uses the locate program and associated utilities to build up a database of the CDs' contents that allows for rapid searching.
First we need to create the database, the following script does the trick nicely.
#!/bin/bash

onedisk() {
    mount /mnt/cdrom
    find /mnt/cdrom -maxdepth 7 -print | sed "s;^/mnt/cdrom;$1;" > $1.find
    eject -u cdrom
}

echo Enter name of disk in device:
read diskname
while [ -n "$diskname" ]; do
    onedisk $diskname
    echo Enter name of next disk or Enter if done:
    read diskname
done
echo OK, preparing cds.db
cat *.find | sort -f | /usr/lib/findutils/frcode > cds.db
echo Done...

Start with no CD mounted. Run the script. It will ask for a label for the CD; a short name like "sunsite1" is best. It will then quickly scan the CD, eject it and prompt for another. When you have exhausted your collection, just hit Enter at the prompt. A file called cds.db will be created. To make it simple to use, copy cds.db to /var/lib (or anywhere else; that is where locatedb is on my system). Now create an alias like:
alias cdlocate="locate -d /var/lib/cds.db"

Now if I type "cdlocate lyx" I get:
debian20_contrib/debian/hamm/contrib/binary-i386/text/lyx_0.12.0.final-0.1.deb
debian20_contrib/debian/hamm/contrib/binary-m68k/text/lyx_0.12.0.final-0.1.deb
debian20_contrib/debian/hamm/contrib/source/text/lyx_0.12.0.final-0.1.diff.gz
debian20_contrib/debian/hamm/contrib/source/text/lyx_0.12.0.final-0.1.dsc
debian20_contrib/debian/hamm/contrib/source/text/lyx_0.12.0.final.orig.tar.gz
lsa3/apps/wp/lyx-0.12.0-linux-elf-x86-libc5-bin.tar.gz
lsa3/apps/wp/lyx-0.12.0.lsm
lsa3/apps/wp/lyx-0.12.0.tar.gz
lsa4/docs/french/www.linux-france.com/lgazette/issue-28/gx/lyx
lsa4/powertools/i386/lyx-0.12.0-1.i386.rpm
lsa4/powertools/SRPMS/lyx-0.12.0-1.src.rpm
openlinux12/col/install/RPMS/lyx-0.11.32-1.i386.rpm
openlinux12/col/sources/SRPMS/lyx-0.11.32-1.src.rpm
suse53/suse/contents/lyx

In order to prevent locate from warning you that the database is old try

touch -t 010100002020 /var/lib/cds.db

to set the modification date to January 1 2020.
--
Reuben
My English is terrible, so feel free to correct it if you decide to publish...
Hello, I am a French Linuxer and here is my two-cent tip. If you have many CD-ROMs and want to retrieve this_file_I'm_sure_I_have_but_can't_remember_where, this can help.
It consists of two small scripts using the GNU utilities updatedb and locate. Normally updatedb runs every night, creating a database for all the mounted file systems, and locate is used to query this system-wide database. But you can tell them where the files to index are and where to put the database. That's what my scripts do:
The first script (addcd.sh) creates a database for the CD currently mounted. You must run it once for every CD-ROM.
The second (cdlocate.sh) searches the databases created by addcd.sh and displays the CD name and full path of the files matching the pattern you give as a parameter. So you can search for files on unmounted CDs!
To use:
mkdir /home/cdroms
cp addcd.sh cdlocate.sh /home/cdroms
mount /mnt/cdrom

(If your mount point is different, you must adapt the scripts.)

./addcd.sh Linux.Toolkit.Disk1.Oct.1996

It will take some time for updatedb to create the database, especially if the CD-ROM contains many files.
./cdlocate.sh '*gimp*rpm'
Hope this helps, and happy Linuxing!
---Cut here------------------------------
# addcd.sh
# Author: Jose-Luc.Hopital@ac-creteil.fr
# Create a filename's database in $DATABASEHOME for the cd mounted
# at $MOUNTPOINT
# Example usage: addcd.sh Linux.Toolkit.Disk3.Oct.1996
# to search the databases use cdlocate.sh

CDNAME=$1
test "$CDNAME" = "" && { echo Usage:$0 name_of_cdrom ; exit 1 ; }

# the mount point for the cd-ROM
MOUNTPOINT=/mnt/cdrom
# where to put the database
DATABASEHOME=/home/cdroms

updatedb --localpaths=$MOUNTPOINT --output=$DATABASEHOME/$CDNAME.updatedb && \
    echo Database added for $CDNAME
---Cut here--------------------------------
# cdlocate.sh
# Author : Jose-Luc.Hopital@ac-creteil.fr
# Usage $0 pattern
# search regular expression in $1 in the database's found in $DATABASEHOME
# to add a database for a new cd-rom , use addcd.sh

test "$*" = "" && { echo Usage:$0 pattern ; exit 1 ; }

DATABASEHOME=/home/cdroms
cd $DATABASEHOME
# get rid of locate warning: more than 8 days old
touch *.updatedb

CDROMLIST=`ls *.updatedb`
for CDROM in $CDROMLIST
do
    CDROMNAME=`basename $CDROM .updatedb`
    locate --database=$DATABASEHOME/$CDROM $@ | sed 's/^/'$CDROMNAME:'/'
done
Don Cramer wrote:
I was wondering if Linux now has, or will support, any of the multimedia formats supported by Windows, such as AVI, JPG, WAV, MOV, etc?

Yes, all of these are supported in various ways. Animated formats (AVI, MOV, animated GIFs, etc.) are supported through the xanim program, along with a host of other tools (xanim just has the widest range of animation format support). Xanim also has support for playing some types of audio embedded in the video file (such as audio that accompanies an AVI file). Sound formats (WAV, AU, etc.) are supported via the "sox" program (which plays these formats) and the Linux sound drivers (which you can get either in the Linux distributions or as a commercial version which supports a wide range of sound cards and is available from 4Front Technologies for about $20US). Static graphics formats (JPEG, GIF, TIFF, TGA, etc.) are supported by lots of tools: the GIMP (GNU Image Manipulation Program, which is similar to Photoshop), xv (which is like LView), ImageMagick and NetPBM (which are both collections of graphics viewer/manipulation tools). There are lots of tools for viewing/listening to multimedia files. You can try the Linux Multimedia Pages (I've forgotten the URL, but I think it's listed on SSC's Resources pages) and my Linux Graphics pages at www.graphics-muse.org/linux.html.
Multimedia on Linux is probably not quite what you're used to on Windows as far as how you use it, but support for most of the well-known and widely used formats is available. What you can't do (at least I doubt you can) is run Windows-specific multimedia programs from CDs. Those programs won't run (well, they might under WINE, but I've never tried them), but their support files may be readable by some of the Linux/Unix programs I've mentioned above.
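For instance, a command-line session with those tools might look roughly like the following; the filenames are only placeholders:

# view an AVI (or QuickTime/MOV) animation with xanim
xanim movie.avi
# play a WAV file through the sound driver using sox's "play" front end
play sound.wav
# view a JPEG image with xv
xv picture.jpg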
--
Michael J. Hammel, The Graphics Muse
The distinction between Linux and UNIX is, at this point, only in name. UNIX is a trademark of the X/Open Group and requires a fee for branding a product as a flavor of UNIX. Some vendors have considered getting UNIX certification for their particular brand of Linux, but I haven't heard of any of them actually doing it. Linux does, however, support the POSIX standards and others required for the UNIX branding, so it could be considered a flavor of UNIX even if it isn't quite official.
--
Drew
you asked:
I am a 2nd year computer science student. I have looked everywhere for the answer and found only basic answers. My question is what exactly is the difference between Linux and UNIX, excluding size and speed. I would appreciate it if you could just send me a few of the differences.
For all intents and purposes Linux *is* Unix -- ie. it is another unix variant. UNIX is not a single operating system, anyway. It is now a brand managed by the Open Group. That means that Operating System vendors (or Linux distribution vendors) may apply for Unix certification and branding. They pay money and TOG runs a bunch of tests and basically says, "ok, that's unix."
Of course, there are other relevant standards, such as POSIX. No standard fully covers the differences between branded or unbranded Unix implementations.
My question to you is, which Unix variant are you referring to? There are so many: Solaris, HP-UX, Digital Unix, AIX, SCO, and BSDI, to name some common ones. SCO is sometimes thought of as the main UNIX, as it is the direct descendant of AT&T's original System V source.
Of course, the BSD (Berkeley) derived variants play a pivotal role in Unix history as well. All of the Unix variants mentioned above, including Linux, incorporate functionality and ideas from both primary Unix flavors as well as incorporating their own ideas.
System V (SCO) style Unix, for example, has a different boot structure than BSD. Most recent Linux distributions use System V style boot scripts, but Linux systems also incorporate BSD style printing mechanisms. The GNU command-line tools used on Linux systems are much enhanced and extended versions of their System V and BSD counterparts; GNU ls has many more options than what many Unix vendors may ship. To further confuse the issue, GNU tools can be used to replace vendor-supplied commands if desired.
Are we having fun yet?
Your best bet is to read up on Unix history to understand why unix (small u) is not one operating system but a family of operating systems with similar characteristics. Filesystem structure and permissions, basic commands, process scheduling, boot method and dozens upon dozens of other characteristics add up to define an OS as "unix". Linux falls quite handily into this family despite the lack of (expensive and arguably meaningless) Open Group Unix branding.
See Unix Guru Universe for some more info http://www.ugu.com/
Also see the geek-girl site for some more history and info
http://www.geek-girl.com/unix.html
--
Omegaman
In your letter to Linux Gazette #36, you wrote:
I have a Linux box, with SuSE, and a Lotus Notes server. I want to e-mail the status of my workstation to another user that belongs to the Notes Network. Does anybody know how to do that, or just the concepts to do this?
Just pipe the output of a command to mail. For instance, I have a cron job that mails a weekly status report to the members of my workgroup. This helps remind the boss that the Linux box is stable and doing useful work.
Assuming you want to do something simple like uptime, the command line would look like:
/usr/bin/uptime | /bin/mail -s "Uptime Report" me@my.address

The script I run is a little more complex because it gathers statistics from various logs:
#!/bin/bash
#
# Script:  wsr (Weekly Status Report)
#
# Purpose: Summarize the relevant activity of the server for the past week.
#
# Author:  Anthony E. Greene agreene@pobox.com
#
echo " "
echo "Uptime"
echo "------"
/usr/bin/uptime
echo " "
echo "Mail Transactions"
echo "-----------------"
MAILSENT=`/bin/grep -c "stat=Sent" /var/log/maillog.1`
MAILRCVD=`/bin/grep -c "from=" /var/log/maillog.1`
MAILCOUNT=$[$MAILSENT+$MAILRCVD]
MAILRATE=$[$MAILCOUNT/24/7]
echo "$MAILCOUNT ($MAILRATE transactions per hour)"
echo " "
echo "Web Documents Served"
echo "--------------------"
WEBCOUNT=`/bin/grep -c " 200 " /var/log/httpd/access_log.1`
WEBRATE=$[$WEBCOUNT/7]
echo "$WEBCOUNT ($WEBRATE transactions per day)"
echo " "
# End of Script

The cron job is:
/usr/local/sbin/wsr | /bin/mail -s "Weekly Status Report" staff

The "staff" email address is a sendmail alias that points to the actual email addresses of the members of the workgroup. As long as outgoing mail works, this will do what you need.
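As a sketch, the full crontab line for a weekly run might look like the following; the day and time are arbitrary choices, not part of the original tip:

# run the weekly status report at 06:00 every Monday
0 6 * * 1 /usr/local/sbin/wsr | /bin/mail -s "Weekly Status Report" staff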
--
Anthony E. Greene
There's a program called imwheel that supposedly does this in XFree86, although I haven't tried it myself. Its homepage is http://solaris1.mysolution.com/~jcatki/imwheel/ and the freshmeat appindex for it is http://freshmeat.net/appindex/1998/08/15/903164189.html
--
Drew
Well, the short answer is "yes." :) There are a number of Linux applications that can view and/or edit these types of files.
The Gimp ( http://www.gimp.org/ ) can edit almost every graphics format known to man, and could be considered a good alternative to Photoshop. You can see quite a few others at http://core.freshmeat.net/appindex/x11/graphics.html
In regards to the video formats, XAnim ( http://xanim.va.pubnix.com/ ) can view most of these without any problem.
As for sounds, there are a plethora of programs for doing almost
anything you could think of that involve sounds. For starters, take a
look at http://core.freshmeat.net/appindex/console/sound.html and
http://core.freshmeat.net/appindex/x11/sound.html for a few of the
available sound apps.
Have fun.
--
Drew
There's another HOWTO at http://eunuchs.org/linux/ip_masq/ip_masq_content.html I haven't tried setting up IP masquerading myself, so I'm not sure how much good this will do, but I hope it helps a bit.
--
Drew
This one's pretty easy. If you're lucky, your settings are only corrupted. This is fixed by removing the ".netscape" (or just "netscape", without a leading dot, I'm not sure which offhand) directory from affected users' home directories.
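A rough sketch of that first suggestion, assuming the settings live in ~/.netscape (moving the directory aside rather than deleting it keeps the old bookmarks recoverable):

# move the possibly corrupted settings out of the way;
# Netscape will recreate the directory with defaults on its next start
mv ~/.netscape ~/.netscape.broken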
If Communicator itself is broken, you can remove the /usr/local/netscape directory and reinstall Netscape from the .tar.gz file that I assume you downloaded. If you installed it from an RPM or some other sort of package, I would read the manpage for the package manager and remove it using rpm or dpkg or what have you. Good luck.
--
Drew
I noticed in your mailbag several letters talking about errors on network devices.
The correct answer is to upgrade the net-tools package. The format of many /proc files has changed, in particular those used by ifconfig. I recommend that everyone browse through linux/Documentation/Changes. I would even suggest it be mandatory reading. =)
--
David
This appeared in Jan '99 issue:
From: James Jackson
Does anybody know how to enable the wheel on an Intellimouse under Linux? (Red Hat 5.2)
I am sending this to gazette as well, because it might be of general interest.
Look at
http://www.inria.fr/koala/colas/mouse-wheel-scroll/
He might be able to help you.
--
Torben
You wanted to get rid of "Start" in fvwm95. Edit your .fvwm95rc like this:
*FvwmTaskBarAutoStick
# here I changed Start to Linux
*FvwmTaskBarStartName Linux
*FvwmTaskBarStartMenu StartMenu
*FvwmTaskBarStartIcon mini-exp.xpm
*FvwmTaskBarShowTips

You might want to have a look at an article I wrote a few months ago:
http://www.ssc.com/lg/issue21/fvwm.html
Regarding the virtual desktop issue, have a look into your /etc/X11/XF86Config. In the screen section look for the keyword virtual. Change it to
Virtual 0 0

to switch off the virtual screen.
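For context, the relevant piece of an XF86Config Screen section might look roughly like this; the driver, depth and mode names are only examples, not taken from the letter:

Section "Screen"
    Driver      "svga"
    Device      "My Video Card"
    Monitor     "My Monitor"
    Subsection "Display"
        Depth       16
        Modes       "1024x768" "800x600"
        Virtual     0 0
    EndSubsection
EndSection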
--
Gerd
Contents:
The March issue of Linux Journal will be hitting the newsstands February 11. This issue focuses on Internationalization and Emerging Markets with articles on multilingual Emacs, printing messages in different languages, autonomous automobiles in Italy and mediated reality. This last is the second part of Dr. Steve Mann's series on wearable computers. Linux Journal now has articles that appear "Strictly On-Line". Check out the Table of Contents at http://www.linuxjournal.com/issue59/index.html for articles in this issue as well as links to the on-line articles. To subscribe to Linux Journal, go to http://www.linuxjournal.com/ljsubsorder.html.
Date: Thu, 24 Dec 1998 01:00:31 -0800
OzSearch extends its offer to the Australian Linux community to LG
Australian readers as well:
OzSearch Internet Guide, an all-Australian web directory, recently released its new web site. The site is intended to offer a starting point for any complete search for Australian web sites. In addition to successfully running Linux Red Hat 5.2 (where 100+ days of uptime are common), the site is powered by Apache v1.3.3 with ModPerl and MySQL. Stress tests have indicated that this configuration scales exceptionally well.
To give back to the Linux community, OzSearch is currently seeking to help sponsor an Australian-based Linux users group. Please provide your group's information to Kris Duggan (kduggan@ozsearch.com.au).
OzSearch can be found at
http://www.ozsearch.com.au
For more information:
Kris Duggan, President of OzSearch Internet Guide,
kduggan@ozsearch.com.au
Date: Thu, 07 Jan 1999 09:55:07 -0400
A major free and open source
software event is a convention entitled the Bazaar. It will have
over 5,000 attendees and 100 vendors. The speaker list includes major
free software developers and advocates like Eric Raymond, Richard
Stallman, and Alan Cox. The Bazaar is the first convention of its kind
to ever be held in New York city and we are very excited for the
Bazaar's maiden voyage. It will be opening on March 13th and continuing
through the 15th at the Jacob Javits Center in Manhattan.
For more information:
Eddie Park, Assistant Director
eddie@inlimine.org,
http://www.thebazaar.org/
Date: Wed, 27 Jan 1999 09:25:58 -0800 (PST)
Full tutorial and technical session programs, and online registration,
are now available at http://www.usenix.org/events/ for the following:
NORDU99 - 1st Nordic Europen/USENIX Conference, February 9-12, 1999, Stockholm, Sweden
OSDI: 3rd Symposium on Operating Systems Design and Implementation, February 22-25, 1999, New Orleans, Louisiana
1st Conference on Network Administration, April 7-10, 1999, Santa Clara, California
----Co-located and sharing two days of tutorials with:
1st Workshop on Intrusion Detection and Network Monitoring, April 9-12, 1999, Santa Clara, California
COOTS: 5th Conference on Object-Oriented Technologies and Systems, May 3-7, 1999, San Diego, California
Workshop on Embedded Systems, March 29-31, 1999, Cambridge, Massachusetts
LISA '99--13th Systems Administration Conference, November 7-12, 1999, Seattle, Washington
Tcl/Tk: 7th USENIX Tcl/Tk Conference, February 14-18, 2000, Austin, Texas
For more information:
http://www.usenix.org/events/,
Job at Cincinnati Bell, Cincinnati, Ohio. Administration of Linux servers and development.
Position Profile:
This position will work closely with the Internet Services team to provide support for the growing number of IP applications within the Internet Access product. Will have a direct impact on the customer's perception of the quality of the Internet Access product as well as the quality of any IP services within the Internet Access product.
Process and Technical Knowledge:
StarOffice 5.0 Personal Edition Report:
http://macarlo.com/
World's Smallest Computer Runs Linux:
http://wearables.stanford.edu/
Date: Mon, 11 Jan 1999 15:32:04 -0800 (PST)
The recently released "UNIX CD Bookshelf" contains six O'Reilly books
plus the software from "UNIX Power Tools" -- all on a convenient
CD-ROM. A bonus hard copy book of the bestselling "UNIX in a Nutshell:
System V Edition", is also included.
The six included books, purchased separately, would retail for
$175.70, but "The Unix CD Bookshelf" package retails for only $69.95.
The CD-ROM contains the complete text of:
A free sample chapter, Chapter 2: UNIX Commands from "Unix in a Nutshell", is available at: http://www.oreilly.com/catalog/unixcd/chapter/index.html
For more information: http://www.oreilly.com/catalog/unixcd/
Date: Fri, 8 Jan 1999 17:18:51 -0700
ISS Ships Industry's First, Integrated
Network and Host-Based Intrusion Detection Solution
ATLANTA, Ga. - January 7, 1999 - Internet Security Systems (Nasdaq: ISSX), the leading provider of adaptive network security solutions, today announced the worldwide availability of RealSecure 3.0, a solution that combines both network- and system-based intrusion detection and response capabilities to form a single enterprise threat management system. By adding host-based intrusion detection capabilities to RealSecure, customers can have the best of both worlds: fast detection of attacks at the network level stopping security breaches before damage is done, as well as identifying unauthorized access attempts at the system level.
For more information:
Nicki Kopelson,
nickik@connectpr.com
Ottawa, Canada - January 13, 1999 - Corel Computer, a division of Corel Corporation, today announced the availability of the NetWinder Group Server, the latest addition to their family of NetWinder thin servers.
The NetWinder Group Server offers departmental workgroups and small businesses a wide range of Internet/intranet services in an easy-to-use, affordable package. Based on the StrongARM® RISC microprocessor and the Linux operating system, the NetWinder product family delivers powerful, cost-effective desktop and server solutions.
The NetWinder Group Server with 32 MB RAM carries a suggested retail price of US $979 for the diskless version, US $1,339 with 2 GB hard drive, US $1,629 with 4 GB hard drive and US $1,839 with 6 GB hard drive. Prices subject to change without notice. Dealers may sell for less.
The NetWinder Group Server provides a full suite of Internet/intranet services, including:
For more information:
http://www.corelcomputer.com/
OAKLAND, Calif - January 18, 1999 - TurboLinux v3.0.1, the first version of the popular Linux distribution to be sold as a boxed set, is available today. TL 3.0.1 will offer a comprehensive installation guide and manual, is priced at $49.95 and can be ordered at http://www.turbolinux.com/orders/.
TurboLinux, the most popular distribution in Japan, if not Asia, has begun a large transition into the U.S. market. Pacific HiTech (PHT) has been a major part of the Linux community for years, previously acting as the distributor for Red Hat and still acting as the Japanese distributor for all major Linux distributions. PHT recently opened its new US offices in Oakland, CA and is working on more focused Linux products, beginning with TurboLinux Server, slated for release in the first half of '99, followed by other releases, including TurboLinux 4.0 in early summer '99.
For more information:
Justin Ryan, CEO, Senior WebMaster - PCHelp,
http://computers.iwz.com/, webmaster@computers.iwz.com
LAS VEGAS, NV-Informix Partner Forum-January 19, 1999-Informix Corporation (NASDAQ: IFMX), the technology leader in enterprise database-powered solutions and award winning Linux vendor, today announced the overwhelming success of its holiday Linux promotion and ongoing Linux program. International distribution of Informix products on Linux has exceeded expectations with more than 175,000 copies of Informix databases on Linux distributed over the last six months. In response to this overwhelming demand, Informix has increased the global availability of its market-leading Linux portfolio through two strategic alliances with leading Linux distributors Red Hat Software and SuSE. These distribution channel alliances give the company even greater penetration into the rapidly growing worldwide Linux community and make access to Informix products even easier for Linux enthusiasts and business users. These alliances make Informix's Linux products available for download from both vendors' Web sites and demonstrate Informix's unmatched commitment to the Linux platform.
A free development copy of Informix's database is bundled with SuSE's new 6.0 release of Linux. Available in Germany today, SuSE 6.0 will be stocked on U.S. retail shelves for a price of $49.95 by the end of January. The product bundle is currently available from the SuSE FTP site. Informix users will need to register the product online with Informix (http://www.informix.com/register4suse), to receive the free development license.
Informix Dynamic Server, Linux Edition Suite is available for download from the Red Hat Web site (http://www.redhat.com). Informix users will need to register the product online at the Red Hat web site, to receive the free 30-day license.
For more information: http://www.informix.com/
Date: Thu, 14 Jan 1999 17:07:39 -0600
SUNNYVALE, CA - Fastlane Software Systems, Inc. announces the release of its Xni network analysis, security and accounting package on the Linux platform. Xni is a comprehensive, easy-to-use, software-only solution that monitors every conversation between hosts in real time, producing a concise graphical view of network usage and traffic flow without the heavy resource drain and limitations of SNMP/RMON tools or the dedicated hardware typically required of network analyzers.
Compact data format permits 7-day, 24-hour reporting.
For administrators concerned with tracking DNS performance, Xni uses DNS/Yellow Pages to closely monitor DNS/BIND entries for all hosts it sees and reports all devices that have no DNS entry or result in a timeout.
Xni can identify the activity of all network hosts in real time or over time. Applications can be tracked either individually or in groups. The system can be configured to monitor traffic and respond to alarms in intervals as small as one second. Findings are presented as an easy-to-read combination of graphs, charts and lists.
On-the-fly HTML reporting permits access with a standard browser.
Xni features on-the-fly HTML reporting that allows administrators to create reports on network traffic usage and view them from any machine using a standard browser.
For more information: Fastlane Software Systems, http://www.xni.com/
Date: Tue, 19 Jan 1999 09:30:26 -0500
WESTBORO, Mass.--(BUSINESS WIRE)--Jan. 19, 1999--Applix Inc.,
a leader in front office business solutions, today announced support
for a new platform for its market leading suite of decision support
applications. Applixware, Applix's integrated office suite, will run on
Apple's Power PC based computers running the Linux operating system. In
addition, Applix will be selling the product with a bundled version of
LinuxPPC's Linux operating system.
Applixware is a graphical suite running natively under Linux and includes Applix Words, Applix Graphics, Applix Presents, Applix Spreadsheets, Applix Mail, Applix Data, Applix HTML Author and Applix Builder, a visual, object oriented, rapid application development tool that provides full programmability and customization for the suite.
LinuxPPC Inc., headquartered in Madison, WI, distributes the leading Linux distribution for the PowerPC platform. LinuxPPC has been working closely with Applix to raise awareness of the suite's availability on the platform, and has recently announced that the operating system will run on Apple's successful iMac product.
For more information:
Applix, Inc., http://www.applix.com/
Orem, UT, January 18, 1998, BASCOM today announced the availability of its Internet Communications Server (ICS), an educational software/hardware solution developed for the OpenLinux OS from Caldera Systems Inc. Having successfully deployed ICS at key regional sites, BASCOM will now make it available to K-12 schools through the third largest hardware OEM and accompanying reseller channels. BASCOM's use of OpenLinux provides the education vertical market with its first Linux-specific application. While providing a secure and easily transportable platform for future alliances, BASCOM's decision to use OpenLinux was based on the unique needs of the education community: needs that fell directly under Caldera Systems' focus on Linux-based business solutions: stable, proven, tested and supported.
For more information:
BASCOM Global Internet Services, Inc.,
http://www.bascom.com/, info@bascom.com
Caldera Systems Inc.,
http://www.calderasystems.com ,
linux@calderasystems.com
Kearny, NJ - January 26, 1999 - Servertec today announced the availability of a new release of iServer, a small, fast, scalable and easy to administer platform-independent Web/Application Server written entirely in Java™.
iServer is the perfect Web server for serving static Web pages and a powerful application server for generating dynamic, data-driven Web pages using Java Servlets, iScript, Common Gateway Interface (CGI) and Server Side Includes (SSI).
iServer provides a rich environment for building and deploying cross platform Web-based business critical Internet and Extranet applications. iServer is also a robust, scalable platform that individuals, work groups and corporations can use to establish a Web presence.
iServer preview release is available for free at http://www.servertec.com/ (connect-time charges may apply).
For more information:
Servertec, http://www.servertec.com/
Manuel J. Goyenechea,
goya@servertec.com
Well, the 2.2 kernel is finally out. Indeed the 2.2.1 patch has also made its way onto the scene (you just knew they'd find something worth fixing in the first week).
If you're considering upgrading you'll want to look through the list of required/suggested package upgrades to go with that. Although most code in userspace isn't affected much by kernel changes there are always some utilities and applications that will be.
Of course, you can install a new kernel right alongside your existing one --- and reboot between them with glee. Remember, LILO is a multi-boot utility as well as a boot loader --- so you can easily add new entries to it.
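As a rough sketch of what such entries look like in /etc/lilo.conf (the device names, kernel paths and labels below are only examples, not taken from any particular system):

boot=/dev/hda
prompt
timeout=50
default=linux-2.0

image=/boot/vmlinuz-2.0.36
    label=linux-2.0
    root=/dev/hda1
    read-only

image=/boot/vmlinuz-2.2.1
    label=linux-2.2
    root=/dev/hda1
    read-only

After editing the file, remember to re-run /sbin/lilo so the new entries are written into the boot map.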
This upgrade will be much easier than the migration from 1.2 to 2.0 (when the structure of many /proc interfaces changed --- breaking the 'ps' related utilities). That's good, since there are probably close to ten times more Linux users now.
Of course, the faint-hearted can just wait for their friendly distribution maintainer to put out an all-new version with the 2.2.x kernel and all the new utilities pre-built. However, what would the fun be in that?
To learn more about upgrading your kernel look LinuxHQ (http://www.linuxhq.com/). They have about a half dozen links to pages on the subject (particularly with lists of requisite package upgrades and links to the tar.gz files and even one site that has links to the requisite RPMs).
After you upgrade you'll want to keep your eyes on those sites, checking back over the next couple of months. There will probably be other packages that are found "wanting" (unready for 2.2).
If you get that all installed, read all my rantings for this month and are still bored --- take a look at the "Linux Tips & Tricks" site (http://www.patoche.org/LTT/) and consider adding your own suggestions to the mix.
I added a couple myself. I also suggested to the site maintainer that he link to LG's "2-cent Tips" and to the Linux-Tips HOWTO (http://metalab.unc.edu/LDP/HOWTO/Tips-HOWTO.html).
While we're on the subject of "tips" here's one for you budding shell scripters and programmers out there:
If you have to use /tmp --- do it safely. Sure, your script is running on a single-user workstation now. But eventually you'll use it on a multi-user machine or someone will copy it. There are all sorts of nasty tricks people can play on you involving symlinks in /tmp.
Here's one way:
TMPD=/tmp/$0$$$(date +%s)   ## get a (hopefully unique) name
                            ## use any reasonable method for this.
OMASK=$(umask)
umask 077 || exit 1
mkdir $TMPD || exit 1
trap 'rm -fr $TMPD; exit' 0
umask $OMASK
... this should either successfully make a safe, private directory under /tmp (and you use $TMPD for the rest of your temporary file operations --- using whatever names you want) or it should fail. There should be no race condition since the new directory should be made with the appropriate permissions in a single system call (and my strace output under Linux/bash confirms that).
The part to be careful of is the 'trap' clause. That should automatically remove the temp directory and files on exit (normal or in response to any trappable signals). (If you use a kill -KILL on that script while it's running --- it won't get a chance to clean up after itself, but a normal [Ctrl]-[C] and most other kill signals should be fine.) I still suggest using your own private ~/tmp directory whenever that's feasible (but not if your $HOME is served over NFS).
I'll be teaching a class in shell scripting at Mission College (Santa Clara, CA) starting tomorrow. That should be interesting.
From Mark F. Johnson on Mon, 04 Jan 1999
Greetings Honorable Answer Guru,
I have been helping a friend of mine set up RedHat Linux on his system (dual-boot with Windows98). He has a Diamond Supra PCI Voice modem, which is set up on Com 3 but has an IRQ of 11. (I know, I know, it's bizarre, but that's the way it is.) His modem works fine in Windows, but Linux wants to assign it IRQ 4, of course. The modem is apparently configured to use IRQ 11 and the IRQ can't be changed in Windows. I have tried using the "setserial" command and was successful with changing the IRQ, but the modem still won't initialize, and rebooting the system resets the IRQ to the default. I've only been into Linux for about a month, so I'm no expert in the fine art of script writing. I am willing to try, if someone like yourself might give me a starting point and head me off in the right direction. Any ideas/suggestions will be greatly appreciated.
Last I heard the Diamond/Supra PCI modems were of the "winmodem" variety. They don't work under MS-DOS, or Linux (only under Windows --- probably only under Win '95 and Win '98, maybe they have an NT driver, too).
So you should probably return it. Then go back through last year's "Answer Guy" and search for the word modem. Almost every problem that has been reported about any internal modem has been that it was a winmodem.
Hopeless!
(If Diamond claims it is not --- then boot from a plain old DOS floppy and get it to dial the phone using an old shareware copy of Telix, Procomm, QModem, or any other MS-DOS program. If that works, there's hope. Otherwise BURN IT!)
(If it really isn't a winmodem then try disabling "plug and play" in your BIOS and/or playing with the pciutils package (available at Linux sites --- search http://www.freshmeat.net for that).)
From Mark F. Johnson on Fri, 08 Jan 1999
Greetings Once Again Honorable Guru, You were, of course, right on target with your previous assessment of my modem woes in regards to what was indeed a WinModem. I had my friend go to the same dealer from whom I bought my modem, an A_Open FM-56. He bought and installed what was supposedly the same modem, but again, no joy. Come to find out that A_Open's current line of PCI modems, including both FM-56 models, are all WinModems. The DOJ may be on to something after all. To make a long and boring story short, my friend is going to buy an external modem. To save time and continued harassment of your Honorable self, may I implore you to recommend a modem that will work equally well with Windows and Linux? Much appreciation for your assistance.
Any external modem should be O.K. --- I use an older Zyxel 28.8 --- and I've had good luck with the old Practical Peripherals 28.8 fax modems.
However, the models change so fast, and the companies merge and die so often, that this is another of those areas where a look at the latest Hardware-HOWTO and a poll of your favorite users group, newsgroup or mailing list is probably your best bet.
(I like the U.S. Robotics Courier series --- but they are expensive. I detest their less-expensive "Sportster" series --- too cheap).
From Mark F. Johnson on Wed, 13 Jan 1999
Greetings Once Again Honorable Answer Guru, Just wanted to drop you a line and thank you for your wise and profound advice. My friend's modem dilemma has been solved. He ended up with a Zoom external 56k that set up easy and works like a charm. Best of all, it was just under $90. Who needs Winmodems? Again, much thanks.
M.
Glad I could help. That is a pretty good price. Now I'll be flooded with requests about where to get them...
From John Radcliffe on Mon, 04 Jan 1999
One thing that might make Linux more attractive for the Desktop market is some clarification of security issues. While I don't consider myself an expert on desktop computer matters, people keep coming to me for assistance and advice so I must not be completely obtuse on the subject. Still I do not understand all that I read regarding Linux security.
I agree. I'll be giving a talk on this subject:
13 Tips for Securing your Linux System from Common Threats
... at the Silicon Valley Linux Users Group (http://www.svlug.org) this week.
If I get my act together I'll set up some web pages with some version of the content of my slides and notes at http://www.starshine.org/linux/security/tips.html
(I've put a placeholder there until my notes are presentable).
If you're in the Silicon Valley (San Jose, California) area --- come to the meeting.
I would like to put together a simplified security guide for people who are not providing internet content or services, but wish to use a web browser from the Linux desktop. But I do not want to give bad advice through my lack of understanding.
The best advice is to disable all local services (deactivate inetd, sendmail, and the local httpd).
Do a 'netstat -na' command to see what ports are "active" on your system. If it reports anything in "listen" mode on any port --- you've still got some networking service listening.
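For example, output like the following (trimmed --- the exact columns vary a bit between netstat versions) would show that an SMTP daemon and the portmapper are still listening on your box:

Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:25              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN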
It's a bit more complicated than that. I'll go into more detail a bit later.
One thing which I do not understand is how crackers gain access through SUID root programs. From a look at 'rootshell' and 'bugtraq' there seem to be innumerable ways to do this, and new ones seem to be found daily. Apparently even 'secure shell' isn't immune to exploitation. Rather than have the average desktop user try to keep current with all of these, would it be safe to say that if Telnet, Shell, and Login are commented out in /etc/inetd (and file permissions are correct as per the Linux Security HOWTO) that the desktop user's machine would be safe from this type of attack?
To exploit a bug in an SUID program (whether it's owned/run as 'root' or any other user) the attacker must first gain "shell" access or must otherwise trick some service into executing the program. The attacker must also be able to supply that SUID program with some sort of degenerate data (usually input or environment values --- though some exploits occur through signals, shell aliases, etc).
If you are assuming a desktop system which is "owned by" the operator --- that is that you expect any person at the console to have "root" access --- then your primary threat vectors are network/remote exploits (disable services) and trojan horses (or --- very rarely under Linux --- viruses).
In other words, if I can already attain root by rebooting into single user mode, I don't need to exploit a bug in some SUID binary to 'get root.' If I get to a shell prompt remotely --- you've already lost (there are too many opportunities for me to violate too many security policies --- so your focus, in the common case of client workstations, should be on preventing remote access to shell services and remote execution of any code).
You are correct regarding 'secure shell' or 'ssh' as it's more commonly known. This does nothing to protect a system from SUID bugs nor from trojan horses. That's not its purpose. The purpose of ssh is to allow secure remote access --- which is very difficult to spoof, hijack, sniff, or otherwise compromise.
ssh is a cryptographically strong version of 'rsh', 'rlogin' and 'rcp'. It uses RSA public key cryptography to perform mutual host authentication, and to establish a one-time session key. It then uses IDEA or some similar (user/admin configurable) symmetrical key encryption to protect the contents of the session from sniffing. Since the potential attacker should not be able to properly encrypt any packets (no access to the session key) --- this also prevents the attacker from injecting any forged packets into the communications stream (a process referred to as "session hijacking").
There are a number of other encryption packages available for Linux. They operate over various protocols, serving different needs and providing different features and applications. For example SSL is a set of protocols that are most commonly used for securing web pages and communications between browsers and web servers (primarily submission of form data to CGI scripts). SSL is used because it is commonly built into the most popular web browsers. There is a suite of other SSL applications such as ssltelnet and sslftp (these are client/server packages --- so your intended host sites must install the appropriate daemons before your clients will be able to use these protocols).
I did post a rather lengthy message on free crypto tools recently --- giving a pretty large list of the tools, though almost no "HOWTO" coverage of them. The idea was to provide lots of pointers to the web sites where more info on these tools (and the tools themselves) could be found.
Naturally, due to the continuing disgrace of U.S. federal government regulations --- which constitute an obvious and despicable subversion of our Bill of Rights --- we are unable to freely provide our crypto software to the world at large. So free nations elsewhere are required to provide these. (Please write to your congress critter to let them know that this is a major voting issue for all software enthusiasts --- and follow up by endorsing candidates who recognize that freedom of speech extends to the expression of practical mathematics through the art of computer programming.)
I normally avoid politics in my column. However, this is one issue on which I cannot be silent. The sheer pettiness of these regulations (they didn't have the guts to pass them as laws --- they are "regulations" enacted without direct congressional action but clearly with plenty of underhanded political support) is astounding!
The notion that a computer program can be arbitrarily classified as a "munition" and thus fall under export control is a slippery slope. It's only a hair's breadth from the notion that these "munitions" should entail mandatory registration and "7 day waiting periods" and ultimately be banned entirely from domestic use. It'll all start with populist phrases like: "protect the children from child pornography" and "only drug dealers and mobsters have secrets to hide from us".
Anyway, back to your subject. Just commenting out three or four services is not enough. Start by commenting out everything. Then remove 'inetd' completely from your startup sequence. That's much more comprehensive.
However, you may find that you "need" some of those services. For example, if you do IRC you'll find that most IRC servers want to do an "auth" call back to the 'identd' (identification) server on your system. You can use TCP Wrappers, and only re-enable a service (with restrictions that are as tight as feasible in your /etc/hosts.allow) when you know what it is doing and why you are enabling it.
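A minimal sketch of that approach (the daemon name may differ on your distribution, and the IRC provider's domain here is made up):

# /etc/hosts.deny
ALL: ALL

# /etc/hosts.allow
in.identd: .my-irc-network.example.com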
That's why I'll be giving this talk. It isn't simple.
From liam on Thu, 31 Dec 1998
Dear Answerguy,
A Quickie: (please read this!)
WHERE THE HELL CAN I GET THE GRAPHICAL TERMINAL ETERM??? (The new replacement for rxvt, you know, the one that supports pixmaps ....not the terminal mode of emacs, great as that is). I can't find it anywhere, it's not in the sunsite or GNU ftp archives, it's mentioned in some HOWTOS, but with no reference as to how to obtain it. Is it part of commercial X distributions only or something?
yours confusedly, Liam.
As far as I know Eterm is the Enlightenment inspired xterm. The fastest way to find files like this these days is the Freshmeat QuickSearch feature. This led me right to Eterm and its web home page at:
http://www.tcserv.com/Eterm
A Comment: (linux SUPERGRAN)
On a personal note, my family in London who know LESS THAN NOTHING about computers, got their first PC (assembled by me) for Christmas, and are all using a pleasant Linux/KDE/Netscape+Applixware combo which they aver they find much easier to use than "those funny computers at the university" (- i.e. basic win95+Novell/IE/MSOffice monstrosity). Obviously I set it all up and do 100% of the sysadmin, but still even my GRAN uses it (with my sister's help!) for e-mail & browsing. They are quite pleased that it never crashes.
On My Soapbox: (consign to /dev/null now if looks too long &or boring)
Great column! Nice to see someone with the patience to answer those 'naive' (i.e. uninformed!) newbie questions of the general form "So what's this Linux all about, can I run it on a PC ..." etc. A waste of time and annoyance to old-timer hacks and busy developers it may be, but if the OS community is to get the message across to joe public as well as relative "techies" (sys-admins, businessmen, university students like myself...) in the rapidly accelerating battle for hearts and minds, it is vital that everyone makes an effort to encourage outsiders to give it a try. There is a heightened level of media attention on OS & Linux right now which will not necessarily last forever, and an exciting window of opportunity with the rapid development of 'user-friendly' desktop environments such as KDE and GNOME. It is all too much to ask of one poor Answerguy! Indeed it is an issue that needs attention from the OS community with hopefully a more rounded systematic approach developed: the risk of inaction is that growth of Linux in the home/light use market does not come quickly enough, and home/light users get locked into a depressing windows 2000 (NT5) "development" cycle, (if windows 2000 actually gets off the ground by 2010 that is!).
Two years ago I myself was converted to the 'light side of the force' and became a newbie (perhaps I still am), and if it wasn't for an academic UNIX familiarity, and a good friend who was my local guru and walked me through the first few weeks, I would not be e-mailing you now (although a lot has changed in two years). I have been pleased to spread Linux to four friends since then (walking them through their first install etc.), and a healthy informal Edinburgh LUG has sprung up consisting mostly of home-users. The growth has been phenomenal as all the 'statistics' attest, but in the coming two years word-of-mouth will not be enough.
Glad you like it. Please feel free to do your part in the great tech support effort. Join a users group in your area. Help out at the occasional installfest. Jump into the newsgroups or onto the occasional mailing list to answer a few questions when you can.
There are still some rough spots for us to go through. However, I think that we'll make it. Linux currently enjoys about 2.5 percent of the desktop market according to one of the recent surveys. So that's our next goal. We tripled our penetration into the server market last year --- I think we can at least quadruple our share of the desktop (for a total of 10%). Talk to me after the Y2K dust settles in 12 months and we'll see if we made that goal.
From 4th Dimension Webmaster on Thu, 31 Dec 1998
Hi, I have a DUAL 400MHz Pentium 2 processor system which runs 400+ processes. In kernel 2.0.x I had to increase max processes in tasks.h, and nr_files and nr_inodes in fs.h.
I tried kernel 2.1.131; it was much more efficient with the dual processors and everything ran more smoothly, except for one problem: there is no "nr_inodes" in fs.h. So whenever I hit around 400 processes, it was out of file descriptors and couldn't spawn any other processes. If you know how to overcome this problem please let me know.
You should be able to just 'echo' the desired values into the proper nodes under the /proc filesystem.
Those would be something like:
/proc/sys/kernel/file-max
/proc/sys/kernel/file-nr
/proc/sys/kernel/inode-max
/proc/sys/kernel/inode-nr
... though I just snarfed those while running a 2.0.x kernel. I'll need to fetch a 2.1.132 and start a new round of tests on that kernel.
In any event --- the nodes should be under /proc somewhere -- and you can just use 'echo' with standard shell redirection to put new values into these at run-time.
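For example (the values here are only illustrations --- and on newer kernels these nodes may live under /proc/sys/fs/ rather than /proc/sys/kernel/, so look around first):

echo 8192  > /proc/sys/kernel/file-max
echo 24576 > /proc/sys/kernel/inode-max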
Somewhere on the 'net there is a FAQ or HOWTO that describes this and gives sample values. I think the max inodes should be about 3 times the max open files. Anyway, take a look through the Kernel mailing list FAQ at:
http://www.tux.org/lkml.html
From chris smith on Wed, 30 Dec 1998
Jim: Thanks for your response
In checking out my system with the command ps I find that there is no POP daemon running, so I guess I will have to find that.
in.popd (and most other POP daemons such as qpopper) wouldn't show up during 'ps' unless someone was accessing the service concurrently with your running the 'ps' command.
The whole point of 'inetd' is that it monitors all of the TCP/UDP ports (on all of your interfaces) and dynamically launches the service daemons (in.popd, in.ftpd, in.telnetd, etc) on demand.
So, check your /etc/inetd.conf --- and make sure that inetd is running. Then try to run a POP client.
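The line in /etc/inetd.conf should look something like this (the daemon name at the end depends on which POP package you've installed --- in.popd, ipop3d, or qpopper's 'popper' --- so treat this as a sketch):

pop-3   stream  tcp     nowait  root    /usr/sbin/tcpd  in.popd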
Another trick is to use telnet to connect to the POP-3 port (110). You can then issue USER and PASS commands -- followed by a QUIT command. If those work then your POP daemon is responding.
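A session might look something like this (the greeting banner and the account name are just examples):

$ telnet localhost 110
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
+OK POP3 server ready
USER chris
+OK
PASS yourpassword
+OK 2 messages
QUIT
+OK
Connection closed by foreign host.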
As with most Unix TCP services, the control messages in the protocol are implemented as a set of short commands and standardized responses. This is the way that SMTP, FTP, POP, IMAP and several others work. (There are also services that use binary and null terminated strings for their protocol elements --- those generally can't be "spoofed" or "debugged" using just plain old 'telnet').
As for my comments about the DOS\Windows directory structures, let me clarify: in DOS\Windows when you go to a folder for, say, Netscape, you will find all of the files (for the most part) to run that program under that folder and in directories directly under that folder (excepting perhaps some common system .dll files, autoexec.bat, config.sys, and 3 or 4 other common system files, ignoring the system registry for a while). It seems to me that the programs under Linux are scattered all over the place. I understand that mostly all of the files are text based (makes sense to me for set up reasons), but why are they everywhere, and no one has been able to tell me just what the major directories mean (or represent) --- just why is stuff where it is?
First of all, "folders" are a completely different abstraction than "directories." Folders don't exist in MS-DOS. They are a Windows thing. (Terminology borrowed from the MacOS paradigm).
I think that your belief that Linux and Unix files are "scattered all over the place" (a complaint you've repeated twice now) is largely a matter of your perception. As you say, some DLL's, fonts, and other elements of Windows programs are put outside of the folders and directories that are associated with them.
In any event, Unix (and Linux) provide "mechanisms" --- they don't set "policy." So each programmer is free to use whatever conventions best suit their needs. Most Unix/Linux programmers follow a fairly complex set of conventions --- which have evolved over the course of about 30 years.
That's ten times longer than Windows '95 has been around, and twice as long as MS-DOS.
As for what the different directories "mean" --- read the FHS (Filesystem Hierarchy Standard) which is part of the Linux Documentation Project.
It sounds like you're spending more time fighting the conventions than understanding or accepting them. Some of them are a bit silly (/etc for configuration files; why isn't it /conf?) and some of the file names are historical (which is why we store user account names, shells, home directories, and other info in the /etc/passwd file --- and we store password hashes in the /etc/shadow file).
/usr is the home of "user space" programs and resources, while /var is the tree for /usr type files that are expected to differ between systems (things that used to be in /usr until people started trying to share /usr over NFS). /home is common on Linux and less common on other Unix platforms --- most of which use a set of filesystems like /u1, /u2, etc. /proc is a "virtual" filesystem --- a representation of the kernel's process status as a tree of nodes. This allows programs and shell scripts to access process status and other kernel data without requiring special interfaces into the kernel. The /dev directory is for "device nodes" (filenames through which programs can access and control devices).
It would take a rather lengthy book to go over all of these conventions. You could read "Linux Installation and Getting Started" for some of this. Most of it is more of an "oral" tradition (carried mostly by netnews, over mailing lists, in user group meetings and at technical conferences like USENIX, SANS, and the IETF workshops).
There must be a philosophy behind this system I don't understand yet; can you shed a little light on this??
Read Peter Salus' "A Quarter Century of Unix" if you want to understand the background of Unix (and thereby the heritage of Linux). There is also another book whose title escapes me --- but it's something like: "the philosophy of Unix" --- which is more for programmers.
thanks chris
From Clay Harmon on Wed, 30 Dec 1998
I have just added an Intel Pentium Linux (Redhat 5.1) box to a heterogeneous network consisting of 2 Sun Solaris 2.5.1 workstations and 4 Win95 PCs. Everything has gone pretty much OK, only I can't establish an ftp connection from outside to my Linux box. If I try to ftp into the Linux box from the Sun stations, I get a "421 Service not available, remote server has closed connection" message. I have looked at the usual culprits, i.e. /etc/hosts.allow, and have enabled access to the ftp server for ALL. What is truly strange is that inetd "superdaemon" seems to work just fine for the finger, telnet AND rlogin services - I can access the Linux box from outside just fine using any of these, but the ftp server does not appear to be up. The only other piece of network weirdness I have noticed is that when the Linux station boots, I get an error on one of the Sysv init scripts:
Executing: /etc/rc.d/rc3.d/S10network reload
route: netmask doesn't match route address
Usage: route [-nNvee] [-FC] [Address_families] List kernel routing tables
....... and so on, and then
Executing: /etc/rc.d/rc3.d/S50inet restart
That probably is unrelated --- though you should check to make sure your routing tables are right. Are you running 'routed' or 'gated' to get your route dynamically?
The reason that I don't believe this symptom is related to your FTP problem is that it's complaining about routing, and you clearly are getting packets to and from the box (otherwise you wouldn't get the service unavailable message --- and finger/telnet and rlogin wouldn't work).
It also sounds like this probably isn't a TCP Wrappers problem --- since you presumably have all your services wrapped. However, you should check to make sure that your forward and reverse DNS zones are consistent --- since this classically can cause TCP wrappers to deny connections that would otherwise be allowed. (Normally tcpd is compiled with -DPARANOID enabled --- though Red Hat ships with it off, so you can explicitly use the PARANOID directive if you want --- but you don't get it unless you ask for it.)
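A quick way to check is to look up a client's name, then look up the resulting address, and make sure the two agree (the name and address here are made up):

$ nslookup wks1.example.com      # forward lookup: name to address
$ nslookup 192.168.5.17          # reverse lookup: should give back the same name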
In any event it seems that the most likely case is that you have a problem in your inetd.conf file --- probably a path referring to a non-existent in.ftpd. Did you install in.ftpd, WU-ftpd or ProFTPD? You have to install some FTP daemon in order for the dispatcher (inetd) to execute it.
So, make sure the package is installed. Make sure that the path listed in the /etc/inetd.conf is correct. Finally, look in /var/log/messages for any errors that inetd, tcpd, and/or in.ftpd (or its ilk) are reporting.
If all of that is O.K. and things still don't work --- I'd look for something weird with one of the routers (some sort of packet filtering, network address translation or IP masquerading or something like that).
Incidentally, you mentioned "from outside" --- I hope you don't mean that your organization is allowing direct routable IP from the outside world (open Internet) all the way into your desktop workstations. If that's the case I'd highly recommend a review of your security policies and an assets evaluation and risk assessment.
Your company can provide reasonably safe and secure remote access to its employees without leaving itself wide open to every cracker that wants another attack launch point and port-scanning slave.
This may or may not be related to my problem.
I'm stumped. Everything else seems to work just fine - I can get out through our ISDN router to the net, Netscape works fine, and all of the other services seem to work just fine. I can use the ftp utility to access the Sun stations, and "get" files, but I would really like to be able to ftp from our PC's into the Linux box, without having to go through the complicated path of ftp'ing from PC to Solaris(put) and then from Linux to Solaris(get) to just transfer a simple file. I don't have the option currently of ftp'ing from Linux to PC, because Win95 does not have an ftp server as a standard option, so I would like to be able to ftp from PC to the Linux (put). I have the feeling there is something simple that I'm doing or not doing that would fix this problem.
Thanks for your help
Look for your ftpd program. There are several to choose from. I think Red Hat 5.1 uses 'in.ftpd' as re-ported from the OpenBSD sources. Most Linux distributions default to the Washington University (St. Louis) WU-FTPD. I've recommended others (such as ProFTPD, BeroFTPD, and ncftpd) in previous columns.
From chris smith on Tue, 29 Dec 1998
James:
I have been going over all the back issues of the Linux Gazette (and many books and articles) looking for info on setting up a Linux (5.1) machine as an ISP to serve e-mail to customers.
In a test scenario I have created new accounts with passwords and sent them e-mail from outside (through another ISP), but trying to find the info on how to retrieve the e-mail is very difficult. My intent was to use POP3, and apparently I have to configure inetd.conf to run POP3 and allow others access to their accounts.
On most distributions POP and various other servers are enabled by default. Normally it's wise to edit /etc/inetd.conf to disable POP and other services.
When you created these accounts --- one thing you should probably do is disable user access to shell (login) services by setting their shell to /bin/false.
Actually there is a problem with that, too. It gets a bit complicated by the fact that /bin/false on many Linux and other Unix systems is actually a shell script. You'd think that a shell script that does an immediate exit would be safe enough. However, 'telnetd' and some other services will propagate certain types of environment variables to the login shell. It's possible (using some shell quoting hackery) to trick /bin/false (the shell script) into executing arbitrary chunks of shell code if they aren't filtered by the telnet daemon.
So, you should compile your own binary equivalent of false --- actually I wrote my own, which I call "denysh", as shown here:
#include <unistd.h>
#include <string.h>     /* for strlen() */
#include <stdlib.h>     /* for exit() */

/* denysh
 * by: James T. Dennis, <jim@starshine.org>
 *     Proprietor, Starshine Technical Services
 *
 * Deny a user shell access.  Intended for use as
 * the "shell" for POP mail, FTP only and other users
 * who are supposed to be restricted to non-interactive
 * use of the system.
 *
 * Usage: using vipw you can replace the "shell" field
 * of any user's account record in the /etc/passwd with
 * the full path to this binary.  You can also add this
 * to /etc/shells and (as root) use the chsh command to
 * apply this (no need to edit /etc/passwd if that bothers
 * you).
 *
 * compile with:
 *      gcc -static -o denysh denysh.c
 *
 * to prevent any chance for shared library (LD_PRELOAD)
 * exploits
 */

int main ()
{
        char *message = "Access Denied: Your account is not"
                        " permitted interactive login!\n";

        write (STDERR_FILENO, message, strlen(message));
        exit(1);
}
... just compile that and read the comments.
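A usage sketch (the install path and the account name here are just examples --- do this as root):

gcc -static -o denysh denysh.c
cp denysh /usr/local/bin/denysh
echo /usr/local/bin/denysh >> /etc/shells
chsh -s /usr/local/bin/denysh popuser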
I also recommend setting the home directories of "POP Only" users to some directory that they don't own, to which they do not have any other access, and also denying them FTP access.
Of course if your customers have special needs --- for example they intend to run 'procmail' on your server, etc --- then you'll need to review your policies and make your own decisions.
Of course, most sites don't secure their systems all that well. So many sites will continue to use the /bin/false, and they'll occasionally see their "POP Only" users (or people who've sniffed or stolen the passwords for their users) subvert the "/bin/false" into full interactive shell access.
Of course if your system is using PAM there are ways that you can limit specific users and groups to specific services (particularly using the 'listfile' module). PAM is the default authentication model for Red Hat Inc's distribution --- and it can be installed on other systems as you like. It's also possible to limit access to services based on where the request is coming from. Thus it's pretty easy to institute a policy that allows 'telnet' and other forms of access from your local LAN while denying it to anyone whose request is coming from an "outside" system.
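As a sketch, a line like the following in the PAM configuration for a given service (for example /etc/pam.d/login on a Red Hat box) denies that service to any account listed in a file of your choosing --- the file name here is made up:

auth    required    pam_listfile.so item=user sense=deny file=/etc/login.denied onerr=fail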
If you're going to run an ISP system you'll want to learn quite a bit more about Linux security than the average sysadmin.
(Shameless plug: I'll be giving a tutorial on the subject at the upcoming LinuxWorld Expo: http://www.linuxexpo.com).
Any help that you can give will be much appreciated.
chris
ps. I got handed this job under protest, saying I am willing to learn (I come from the land of Windows and DOS where everything is in one directory, not scattered around {what is up with that anyway}), and I am reading everything that I can, but there are still many many holes. The local groups are some help, but the continued reference to read the man pages helps little. I hardly understand what they are saying 1/2 the time. Just venting I guess.
I've never seen an MS-DOS or Windows system where "everything is in one directory" --- even if you consider the Win '9x "Registry" --- that is more of a "virtual file system" than a "single file" (since it has many "sub trees" and "nodes").
Indeed, you'd find (if you'd studied any operating systems beyond MS-DOS, Windows, and Unix) that the similarities between MS-DOS and Unix are somewhat greater than their differences.
However, the Unix, and consequently the Linux, convention is to use relatively simple text files for configuration of almost all services. System services are almost all controlled by files under the /etc directory tree.
The use of text files allows for easy repair, auditing and relatively easy automation of changes (since awk, Perl and other text processing scripts can be written to modify many settings on systems across a network). It's also possible to distribute new configuration files (including passwd and group files to update user account information) over the net. This is facilitated by having separate files for different services.
"in the deep end and over my head comming up for air soon I hope"
Well, one approach would be to just "go with the flow" --- just enable the POP daemon support in inetd and let the users access whatever other services they like.
Professionally your best bet is to recommend that a consultant be placed on retainer to help you set up each new service as requested. That consultant should review your needs, show you how to install/configure the service and give you some pointers on maintaining it. It would be a good idea to have that consultant --- or better yet, a different one --- come in to do periodic systems administration and security audits.
In this way you get the help you need, the services installed and configured by someone who's done it before, some training, and someone to whom you can escalate emergencies.
If your boss expects you to "just do it" and expects it to all get done right and in a timely fashion, and refuses to provide you with the additional resources (consultants, training, time, leeway to mess things up, whatever) then you should definitely consider your negotiating position.
(Many employers exhibit unreasonable expectations in this field. They've fallen victim to the lies of software company marketeers that have been chanting "ease of use" for the last two decades. A lot of software is only "easy to use" if you want to do it "their way" and accept whatever limitations and flaws --- particularly security flaws --- it shipped with. However, many of these managers will listen to reason --- and the really important part of a sysadmin's job is to manage the expectations of his or her users and management).
The original thread appeared in Issue 36, "'rsh' as 'root' Denied".
From Walt Smith on Tue, 29 Dec 1998
HI !
THX for the reply......
Unfortunately, I still can't -
as root. Tried it on slackware nicely setup w/ 2.0.30
kernel. Didn't try Red as I don't know it as well.
rsh wally ls
I changed the /etc/inetd.conf to read -h
starts with -
shell stream tcp nowait root /usr/sbin/tcpd in.rshd -h
I also tried -hl and -l
/etc/services has:
shell 514/tcp cmd #no passwords used
(that's the actual statement, including the # comment above)
I had hosts.equiv text of -
(I took hosts ISP bcpl.net and added 'wally' for my pc.)
(wally is aliased for same in file hosts)
wally.bcpl.net +
MESSAGE given is -
permission denied
I also tried renaming hosts.equiv to get it out of the loop entirely.
Your /etc/hosts.equiv seems to be in the wrong format. Your hosts.equiv should contain hostnames --- no "+" (plus) signs or any other data. Some versions don't seem to allow IP addresses -- just hostnames.
I personally recommend that you configure such a system to give /etc/hosts files priority over DNS --- and distribute a good hosts file to all of the systems on this cluster.
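On libc-based Linux systems that lookup order is set in /etc/host.conf --- something like the following (glibc 2.x systems also consult the 'hosts:' line in /etc/nsswitch.conf):

# /etc/host.conf
order hosts,bind
multi on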
Running it with the -l (disable personal .rhosts files) is probably a good idea for a cluster. I'd definitely put this cluster behind a router (any Linux box with a couple of interfaces will do) and configure a set of packet filters to limit outside access to services within the cluster.
The very least you should do with your packet filters is "anti-spoofing" --- let's say your using the 192.168.10.* block of addresses (from RFC1918) for your cluster nodes. You'd put in a rule like this:
ipfwadm -I -o -a deny -W $exterior_interface \
        -S 192.168.10.0/24
... (as one line, of course) to add (-a) a "firewall" (packet filter) rule to the "incoming" (-I) table on the interface which (-W) you've named, which will "deny" any packet that purports to have a source (-S) address that's supposed to be assigned to one of your internal cluster nodes. The -o in this rule specifies that any packets matching the rule ("caught by it") should generate "output" to the syslogs. You can then filter/monitor your syslog for attempts to violate your policy.
This affords only a tiny measure of protection overall. However, it is better than nothing. If a group of machines will have a trust relationship based on their IP addresses --- you must ensure that your routers into that LAN segment won't blithely allow "imposter" packets through.
By the way, bcpl.net is Baltimore County Public Library. Their accounts are $100/year unlimited time, with ppp, telnet to sun shell $, ftp, and 5 megs for email/and/or web page !! Such a deal !!!
see www.bcpl.net/~waltech/ if curious, which I doubt....
I'll leave in the plug. Normally I filter out identifying information from messages before posting them to the Linux Gazette. This is to protect your privacy (and limit the amount of spam that would be sent to my correspondents).
Never programmed in bcpl .... thats a golden oldie, right ??
Yes, it pre-dated B which was the predecessor to C. Some have argued that the next programming language in the evolution of this family should therefore be "P" --- then "L"
I want to use rsh because I want to get a small experimental Beowulf going, and this tidbit is neglected everywhere I've checked. Did I muck something ????????????????
It looks to me like you put extra stuff on your hosts.equiv lines. A "+" on a line by itself would be a "wildcard" allowing in "all" hosts (which is every bit as stupid as it sounds --- and was the default for SunOS and Solaris for many years)!
I think the versions of in.rshd and the related daemons that are commonly shipped with Linux (different versions for different distributions --- most are BSD or Wietse Venema 'logdaemon' based) will ignore such wildcards.
THX for any help !
regards,
Walt Smith
From ehalm on Mon, 28 Dec 1998
Hi, Looking for ways to get my mail from my POP3 account on my ISP and deliver it locally.
Thanks, Ebow Halm
In your subject you list 'procmail' --- that is probably not the right tool for this job.
The normal way to get your mail from your ISP (or any POP server) to your system is to use a mail user agent such as Netscape Communicator that directly uses this protocol.
However, there's another way that's useful if you use 'elm' or 'pine' (or MH as I do). You can use any of several programs that fetch the mail from a remote POP or IMAP server and store it in your "inbox" (usually something like /var/spool/mail/$USERNAME). Currently Eric S. Raymond's 'fetchmail' is the most popular utility for this purpose. There are others with names like 'getpop' and 'popmail' --- some are simple PERL scripts.
One minor complaint I have about 'fetchmail' is that it really wants to relay the mail it fetches through the local mail daemon (usually 'sendmail') --- so that it can apply any local aliasing and filtering rules to it.
Since I like to centralize my mail on one server --- and prevent mail daemons from running on the client workstations and other servers on my LANs --- I need to bypass this.
The easiest way is to invoke 'fetchmail' with some extra parameters to force it to pipe the messages through my preferred delivery agent (procmail). So I use a command like:
fetchmail -m "/usr/bin/procmail -f - "
... note: this is only appropriate for fetching mail for a single user. Some ISP's will spool mail for an entire client domain into a single "mbox" file (this is one method of "virtual hosting" mail). They expect the client to split the mail back into the users within that domain to whom it is addressed.
ISP's that want to do this correctly will add an additional header to each incoming message --- usually called "X-Envelope-To:" One way to do this is documented at:
http://www.sendmail.org/faq/section3.html#3.29
... in the 'sendmail' FAQ (it uses procmail).
I've seen references to another method that just uses a line like:
H?P?X-Envelope-To: $u
... or
H?P?X-Envelope-To: $g
... to your sendmail.cf file (near the top) --- or to your .mc file where it will be passed into your .cf file by m4.
There's a whole section on these "multidrop mailboxes" in the 'fetchmail' man pages.
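For reference, the same per-user arrangement can also live in a ~/.fetchmailrc (make it mode 600; the server name, login and password here are made up):

poll pop.myisp.example.net proto pop3
    user "yourlogin" pass "notmyrealpassword"
    mda "/usr/bin/procmail -f -"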
Instead of using the fetchmail -m (MDA) option I've also occasionally resorted to a different technique --- where I define a line in my /etc/inetd.conf like:
smtp stream tcp nowait root /usr/sbin/tcpd /usr/sbin/sendmail -bs
... and lines in /etc/hosts.allow and /etc/hosts.deny like:
# hosts.allow
smtp: 127.0.0.1
... and:
# hosts.deny
ALL: ALL
... or at least:
# hosts.deny
smtp: ALL
This allows me to configure sendmail (or another SMTP daemon) to be dynamically loaded --- but only for connections by the "localhost" (through the loopback interface). The main reason I use this is that some of the MUA's (mail user agents) seem to want to deliver mail to the local SMTP daemon as well. In particular the mail sending utility in MH seems to demand it.
Granted, most people are somewhat sloppier about their system configuration. They let 'sendmail' (or 'qmail' or some other SMTP daemon) just run on all of their Unix systems --- including workstations that only ever have a single user logged into them. I think it's a bad idea --- unnecessary and possibly a security risk.
('sendmail' has improved immensely over the last couple of years --- but that doesn't mean we should forget that it was a favorite target of crackers for many years --- nor should we assume that some new package like 'qmail' or Wietse Venema's new PostFix doesn't have some as-yet-undiscovered bug).
Incidentally --- another, more hackish, way of getting your mail would be to have some script that ftp'd or otherwise copied your remote "mbox" (inbox) file to your system (performing the necessary locking!) and then fed it through the 'procmail -f' command to process it according to your filters (and feed the resulting messages into your local mbox/inbox or other folders).
One advantage of 'fetchmail' is that it supports a wide variety of advanced authentication options. For more info on 'fetchmail' go to ESR's web page for it:
http://www.ccil.org/~esr/fetchmail
From Stephen P. Smith on Mon, 28 Dec 1998
Is there a Linux program (or programs) that would be equivalent to the msd.exe program (in the DOS/Windows world)?
I would like to know what interrupts, DMA ranges, etc. my system is using so that I can add another ethernet card to my system. I currently have a 3Com 509B ISA card in the chassis and want to install a second ethernet card.
Can you point me to an article, how-to, or FAQ. I have done some searches and can't come up with anything.
Stephen Smith
Quite a bit of that information is available from the output of the 'dmesg' (dump boot-time kernel messages) command, and from virtual files under the /proc directory.
Most of the info under /proc can be gained using common shell commands: 'ls' and 'less' or 'cat'. Some of it is summarized using the 'procinfo' command.
It's also possible to get additional info using the 'lsdev' command, the 'scanpci' command, and utilities from the ISAPNP (plug & play for the ISA bus), PCIUtils and PCMCIA packages. You can use 'SuperProbe' for video cards.
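For the specific question at hand (finding which IRQs and I/O ranges are already spoken for before adding a card) something like the following is usually enough --- though, as noted below, only resources claimed by a loaded driver will show up:

dmesg | less              # kernel's boot-time hardware detection messages
cat /proc/interrupts      # IRQs currently claimed by drivers
cat /proc/ioports         # I/O port regions in use
cat /proc/dma             # DMA channels in use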
Obviously there isn't a single, integrated and easy menu driven interface for this information. I'd love to see Quarterdeck and Symantec collaborate and put together a combined Manifest (TM) and NDiags (TM) for Linux. I personally think that these were the best utilities for DOS in their class (although "System Sleuth" was pretty good, too).
Some of the availability of this info is dependent on how your kernel is configured. It's possible to compile a stripped down Linux kernel (which can be very compact, very fast, and somewhat more secure than a larger or more modular one). Such a kernel may not recognize many of the devices that you have installed, and Linux will generally leave anything it doesn't recognize completely alone.
Generally, it is best to learn about your hardware from the documentation provided with it. Naturally I don't practice this as I'd like --- my systems are mostly cobbled together from spare parts. Unfortunately most systems that most of us purchase are woefully under-documented. The PC industry churns through component designs and chipsets so fast and furiously that most manufacturers can't keep track of what they're using from one day to the next. It's a sad and unnecessary state of affairs --- the natural result of too much competition and commoditization.
(However, without that competition and commoditization we'd all still be paying $5,000 US for XT's --- so I can't complain too much.)
Incidentally, the 'ifconfig' command should tell you which IRQ and I/O base your current card is using. If it's using IRQ 10 and I/O base 0x300 (the default for most 3Com cards) you can usually put the next one at IRQ 11, I/O base 0x280 or 0x320. It's pretty easy to run out of IRQ's on PC's. You can sometimes disable your printer ports to grab IRQs 5 and 7 --- and sometimes (especially on servers) you can nix the PS/2 mouse port to reclaim IRQ 12, and/or one or both of your serial ports to get back 3 and 4. That gives you a total of seven that you can distribute among SCSI and ethernet cards in a big server. If you can take out both IDE channels you might get back 14 and 15. Some systems will let you use 9 and 13. As for I/O address spaces, those usually aren't too crowded.
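If both cards end up driven as modules, a sketch of the relevant /etc/conf.modules lines might look like this --- the second card is assumed to be an NE2000 clone here, so substitute the right driver and settings for whatever you actually buy:

alias eth0 3c509
alias eth1 ne
options ne io=0x280 irq=11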
From farquhar on Sun, 27 Dec 1998
I'm a new Linux user and I've found your column (and The Linux Gazette) immensely helpful. Thanks.
Here's a question I haven't found an answer to, however. Thanks to The Linux Gazette, I know it's possible to connect 386/486 PCs to a LAN containing newer PCs and run them as X Terminals. I also know it's possible to set up text-mode terminals via null-modem using getty. But is there any way to run an X Terminal off another PC via a null-modem link? (This would be great for two-node LANs like you might find in many homes -- a null-modem cable is much less expensive than two NICs and a cable to connect them.)
Thanks. Dave Farquhar
It is certainly possible to do this. You have to run a PPP or SLIP (some sort of TCP/IP networking connection) over the serial line to do it.
However I'll warn that X Windows on a typical 386 or 486 --- especially over a serial line --- would be essentially unusable.
Actually the quality of your video card matters a little more than the CPU. My 386/33 running X on a 2Mb STB Powergraph is more usable than an old 486DX2/66 that my father used to use with a cheap 1Mb or 512K VESA VLB video card. However, neither of them was acceptable --- even when running the apps remotely (the server still has to work locally).
So I wouldn't do it except at the lower resolutions (640x480 and 800x600) --- and even then X is barely tolerable. Of course MS Windows was pretty useless on those old boxes too.
Anyway --- look at the PPP HOWTO and see if you can get your TCP/IP running over the null modem. Then running X over that should be just like running it over any other network connection.
From Vic Ward on Sun, 27 Dec 1998
Where can I find a download site to download the free copy of Microsoft Office for Linux?
peace
vic ward
Whoa! Dude!
It's not April for a couple more months. Save the "Fool's Day" messages until late March!
If you mean "Where can I download a suite of Linux applications like MS Office" that's a different question.
So far the closest Linux analogs to MS Office are commercial packages:
- StarOffice
- http://www.stardivision.com
- Applixware
- http://www.applix.com/appware
You can download the "Personal Edition" of StarOffice for Linux by pointing your web browser at http://www.stardivision.com/office/so5linux_body.html. This appears to be free for personal, non-commercial use. Be prepared! This is a 70Mb download. The tar file isn't compressed, though most of the contents apparently are.
There are also some productivity applications which aren't presented as "suites." Corel's WordPerfect is a recent example of a commercial application (word processor, surprisingly enough) that has been ported to Linux. Actually there have been versions of WordPerfect for Linux for several years --- originally it was sold exclusively through Caldera. However, recently version 8.x was ported and released for Linux. This also seems to be free for "personal" use --- or at least it's available as a free evaluation download. This one is only 25Mb and can be had at:
http://news.freshmeat.net/1998/12/17/#913937580
(Freshmeat lists three different download sites: download.com, cdrom.com, and surfnet.nl).
Getting just a word processor is probably not enough so it makes sense to get a spreadsheet, too. There are several of these available. For someone who likes Excel (as you presumably must, since you're asking for MS Office for Linux) you might try Wingz. This has also recently been updated to version 3.11 (?). You can find that at MetaLab (formerly known as Sunsite.unc.edu -- Univ. of North Carolina's premier Linux archive site):
http://metalab.unc.edu/pub/Linux/apps/financial/spreadsheet/
... or you could get XessLite from a different directory at MetaLab:
http://metalab.unc.edu/pub/Linux/apps/office
From what I gather the latest versions of Wingz and Wingz Professional are under a more liberal license than the previous Linux version (which was shareware for about $50 US, if I recall correctly).
Naturally you'll want to read the licenses for each of these packages to glean details about your responsibilities before you use them.
To get something like "PowerPoint" you could look at 'MagicPoint' (http://www.mew.org/mgp) (from Japan --- MEW is a MIME mailreader for emacs/xemacs; MagicPoint is a separate application). This seems to be under a BSD or GPL license. I was able to get it up and running pretty quickly and it looks like a very promising package. (The presentation files are simple text --- and the effects are laid over them. You just write your presentation in a simple outline format, and slide styles are applied according to your indentation level.)
For free applications you have to dig a bit deeper. There's the ongoing LyX project (to create a GUI front-end to LaTeX), and the Hungry Programmers (of LessTif fame) are working on GWP (GNOME Word Processor). From what I read the Mexican national educational infrastructure will be investing in GWP, Gnumeric (?) and a few other strategic projects as part of their initiative to put Linux onto about 1 million computers at 140,000 sites!
SIAG (Scheme in a Grid) seems to be getting more mention recently, as are Maxwell, PAPyRUS (?) and several others.
Generally you can look for Linux applications at several places. My favorites are Christopher B. Browne's web site at http://www.hex.net/~cbbrowne and the canonical Linux Applications Pages at: http://www.linuxapps.com
Also Linux File Watcher (http://www.filewatcher.org) does some decent categorization and organization.
From rdefoe@reichert.com on Fri, 08 Jan 1999
I am trying to use diald to connect to compuserve. Compuserve requires that the port settings be set at 7 bits - Even Parity before the login and then set back to 8 bits with no parity after sending the password.
I can't seem to find a way to do this with chat. What am I missing?
Terminal line settings are normally controlled with 'stty' using commands like:
stty -parodd parenb cs7 < /dev/ttyS?
... note the redirection from the modem serial device node, not to it. That's a quirk of this command; it works by issuing ioctl()'s on its input file descriptor.
Just offhand I don't know how to invoke stty during 'chat' --- it might not be possible. In the worst case you might have to hack together your own version of 'chat' to add a command or two (which could then invoke the appropriate 'stty' command through a system() call --- or could incorporate some of the 'stty' sources directly).
I'm not enough of a programmer to do this in a reasonable time --- but it's a possibility.
Did you do a Yahoo! and Alta Vista search on the phrase "+Linux +CompuServe"? Is anyone else using Linux to access their CompuServe accounts?
From Laurin Killian on Fri, 08 Jan 1999
James,
I enjoy your column, but sometimes you seem to stop short of the "real" answer. What I mean is, you don't answer the specific question that is being asked. This is good in some ways - because I've picked up some interesting ideas from your more general answers.
Case in point, Answer Guy 36, "How to "get into" an Linux system from a Microsoft client": The guy says he can "get into" - use SMB to view files on his linux box - from win95 and NOT WinNT(sp4). The big issue is the difference, he can see his files with win95, NOT with NT. As of sp4, NT uses encrypted passwords by default for shares and will not view files from a share that does not use encrypted passwords.
Yes. I remembered that. However I didn't remember the details so I wanted to refer him to the FAQ.
There are two options that are detailed in an article in Linux Journal #56 and also in the documentation for samba, in the files ENCRYPTION.txt, WinNT.txt, and NT4_PlainPassword.reg. Basically, turn off encryption on NT, or turn on encryption on the samba box.
The easier, of course, is to turn off encryption on the NT box, but to show interoperability with NT, it is a good idea to actually turn on (password) encryption on the samba server.
Actually it seems harder to disable the encryption on the NT box (or boxes) since you have to do it on every one of them by hand in their weird registry editor.
Enabling the encryption support on the Linux box is a one-time hassle (per server) and can conceivably be automated. It would be nice if we could make it the default --- but those pesky U.S. crypto export regulations are probably chilling that idea.
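The smb.conf side of it is just a couple of lines in the [global] section --- the password file path here is typical but check your Samba documentation, and you still have to populate that file with the smbpasswd program for each user:

[global]
   encrypt passwords = yes
   smb passwd file = /etc/smbpasswd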
What my question really boils down to is this: Should I email you more detailed answers when I know them, since I don't seem to have the email of the person who asked the question?
-Laurin
You're welcome to relay the more specific answers through me. In this case I think the original poster did get the right info from the FAQ. (Usually I get a follow up if my answer didn't quite do it).
Although I avoid saying "RTFM" to any question --- I will sometimes "cop out" and point at a specific FM to R. Sometimes that has more to do with my mood and schedule than with any rationality and the value of the question at hand.
From Steven Hancock on Fri, 08 Jan 1999
Hi, here's another solution to the problem of converting postscript files to gif. Get the ImageMagick package, if it isn't already installed, and use its mogrify command, like this:
mogrify -format gif somefile.ps
and this will create somefile.gif. See the man pages on mogrify and convert for more information.
I figured there was something like this out there.
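It also batches nicely; for a directory full of PostScript files a little shell loop like this (or simply 'mogrify -format gif *.ps') does the lot:

for f in *.ps; do
    mogrify -format gif "$f"
done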
Here are a few other commands that are part of my copy of ImageMagick:
/usr/X11R6/bin/animate
/usr/X11R6/bin/combine
/usr/X11R6/bin/convert
/usr/X11R6/bin/display
/usr/X11R6/bin/identify
/usr/X11R6/bin/import
/usr/X11R6/bin/montage
(I cut this list from a much longer list that's generated with the command 'rpm -ql imagemag' on one of my S.u.S.E. boxes. A similar command would work on any RPM based distribution).
One utility of special note is:
/usr/X11R6/bin/xtp
... which is actually a command-line FTP client for getting and putting files from/to FTP servers.
(There are several other FTP client utilities that can be operated non-interactively by command-line invocation --- so it seems like a duplication of effort to have ImageMagick include one. Presumably the author couldn't find one of those at the time that he needed this.)
These are the sorts of things I like to see in the Tips HOWTO and the "2-cent Tips" columns in LG.
Also I'd love for someone to put together an overview of Linux graphics software with some ideas about how to use xfig, tgif, ImageMagick, xv, the GIMP etc. Not something as sophisticated as the Graphics Muse --- but simpler things for those of us who just need to whip up some web page icons or draw diagrams and charts for the occasional project at work.
Obviously I'm ill-suited to this task since I'm an avowed text mode bigot.
From Faber Fedor on Thu, 07 Jan 1999
Great article. I'm in the middle of teaching a TCP/IP class and would have loved to use your article the past two days when we were going over subnetting.
May I have your permission to make copies and pass the article out to my class?
All of my columns in the Linux Gazette are covered under the LDP variant of the GPL. That does allow for free distribution and use.
You are welcome to use it however you like. Leaving my name associated with it would be appreciated. Then people know who to blame.
I'll be using a (hopefully improved) version of this article in my book.
Note: Please also look for the article on "proxyarp" --- this is a related subject that your students should also understand. Some of those concepts actually support the subnetting and routing discussion by providing a contrast and comparison. (As in: "Here's another way it can be done.")
and now, for my question: you referenced RFC1918 and "private network addresses". I know about them, I follow them, etc. but only because they are an RFC. I mentioned private network addresses to a buddy of mine and he brought up the point of "Why bother? With proxies, etc., you can have any address(es) you want, so it doesn't matter which address(es) you choose." I can't think of a reason to refute him.
So, is there a reason for choosing 192.168.x.x as opposed to using the Post Office's 56.*.*.* for my internal network that no one ever sees? (Yes, I know they're different classes; that's irrelevant.)
By an odd coincidence I've done some consulting for the USPS so I am familiar with the fact that they use proxying to "hide" their 56.*.*.* network from the rest of the world. I suspect that about half of the class A addresses that have been delegated are similarly sequestered.
It would be nice if these organizations returned their IP addresses (exchanged them for smaller address blocks to accommodate their publicly accessible services, routers and proxy hosts). In the case of the USPS there are several Class C addresses that are used by the organization for their web sites et al.
However, the reason for the RFC is to prevent routing ambiguities. If the USPS decided to use some of their 56.* addresses for their websites, routers, etc --- and you needed to access those --- your router wouldn't have any way to know where to send these packets.
Of course, if everyone uses the same RFC1918 addresses and we start trying to connect to one another over VPN's then we have to do some weird "bi-directional" masquerading and NAT (network address translation) to turn your 10.*.*.* addresses into my 10.*.*.* addresses and vice versa. (This is not merely a theoretical problem --- a friend of mine has mentioned that he needs to employ these techniques now).
So, the short answer is: you can do it --- but you'll probably get bitten. There's no guarantee that the organization whose "hidden" addresses you try to use will continue to keep those addresses "hidden". It shouldn't ever concern any other hosts beyond your masquerading/NAT routers and proxy gateways --- so long as you don't "leak" packets with these bogus source addresses.
This sort of "leakage" is probably the most obvious reason to use the RFC1918 addresses. Any router on the net can be configure to drop those packets when any of use accidentally allow them to leak. This is good for the whole Internet.
Hope that helps.
TIA! Faber
From John L Capell on Thu, 07 Jan 1999
After poring over the various resources on the best way to partition my system for RedHat Linux 5.2, I think I've come up with the following: (comments please, before I commit)
> Mount Point        Part. #    Size (Megs)
> ==================================================
> /                  hda1       350
I usually use one third that.
> /usr               hda5       2048
> /home              hda6       1536
I'd make this bigger. On a personal workstation I make /home a symlink to /usr/local/home and /opt one that points to /usr/local/opt ... then I combine those into one larger fs. Thus all my "local" changes and "my" files end up under /usr/local. Obviously that's just a matter of personal taste.
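For what it's worth, the symlink arrangement amounts to something like this (a sketch that assumes /usr/local is already its own large filesystem and that /home and /opt are empty or not yet created):

mkdir -p /usr/local/home /usr/local/opt
rmdir /home /opt 2>/dev/null
ln -s /usr/local/home /home
ln -s /usr/local/opt  /opt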
> /usr/local         hda7       1024
> /var               hda8       300
> /tmp               hda9       300
I also make this somewhat smaller.
> /usr/src hda10 300
I make this a symlink to /usr/local/src.
> <swap> hda11 127
This is fine. I usually make it the second partition.
Ideally this would be located in the center of the drive's platter --- reducing the average seek time to it. However, that's hackish and probably not worth the effort. (If you're actually swapping -- add more RAM.)
While I realize that I may have over-allocated space for programs,
leaving only (only!) 1.5Gb for users, I figure I could always add more space for users with a second hard drive if I needed to.
As you see, it's mostly a matter of requirements analysis --- which classically consists of three considerations: requirements, constraints and preferences. Given the size of the average hard drive sold today (4 to 6 Gb) we have lots of room (and are thus not overly constrained), and since we can use symlinks for most FHS-specified directories (/home, /opt, /usr/src, etc. --- just don't do that with /tmp, /dev, /etc, /sbin, etc.) it is mostly a matter of preference.
The resources I've used are:
(1) The RH 5.2 Installation Manual
(2) The Linux Documentation Project (http://metalab.unc.edu/LDP/)
(3) The Filesystem Hierarchy Standard
(http://www.pathname.com/fhs/2.0/fhs-toc.html)
Good work!
Where (if anywhere) am I straying from efficient disk usage? Thanks!
I think you're devoting a tad too much for /, /tmp and could consolidate some of your filesystems.
If you have reasons for keeping /opt, /home, and /usr/local separate then do so by all means. However, if you don't --- just combine them into one larger fs for maximum flexibility. If you're concerned about 'fsck' time (which grows much longer for larger filesystems), then I can understand splitting them. However, Linux systems are generally so stable that the fsck time on a workstation is not a major consideration (periodic reboots with forced fsck runs can lessen the chance that this will be required at inopportune times).
From sipior on Tue, 05 Jan 1999
Greetings, Mr. Dennis!
Having taken my computer home with me for a couple of weeks, so that I might not be Quake-deprived for the Christmas season, I found myself setting up a PPP connection with a local ISP. I was able to manually effect a PPP connection with little difficulty at all---however, I have been unable to automate the dialup process with ppp-on and ppp-on-dialer scripts (as detailed in the PPP-HOWTO). After tailoring these scripts to my particular setup, I was able to connect well enough, only to have the modem automatically hang up immediately! The relevant portion of my system log (sanitised for our mutual protection) follows:
Jan 2 18:17:56 sarnath kernel: PPP: version 2.2.0 (dynamic channel allocation)
Jan 2 18:17:56 sarnath kernel: PPP Dynamic channel allocation code copyright 1995 Caldera, Inc.
Jan 2 18:17:56 sarnath kernel: PPP line discipline registered.
Jan 2 18:17:56 sarnath kernel: Serial driver version 4.13 with no serial options enabled
Jan 2 18:17:56 sarnath kernel: tty00 at 0x03f8 (irq = 4) is a 16550A
Jan 2 18:17:56 sarnath kernel: tty01 at 0x02f8 (irq = 3) is a 16550A
Jan 2 18:17:56 sarnath kernel: registered device ppp0
Jan 2 18:17:56 sarnath pppd[599]: pppd 2.3.3 started by root, uid 0
Jan 2 18:17:57 sarnath chat[604]: timeout set to 3 seconds
This timeout might be a tad shorter than you'd like. Try 15 seconds or so.
Jan 2 18:17:57 sarnath chat[604]: ATH0^M^M
Jan 2 18:17:57 sarnath chat[604]: OK
Jan 2 18:17:57 sarnath chat[604]: -- got it
Jan 2 18:17:57 sarnath chat[604]: send (ATDTXXXXXXX^M)
Jan 2 18:17:58 sarnath chat[604]: expect (CONNECT)
Jan 2 18:17:58 sarnath chat[604]: ^M
Jan 2 18:18:16 sarnath chat[604]: ATDTXXXXXXX^M^M
... you forgot to sanitize your local number from these logs. I've done it here.
Jan 2 18:18:16 sarnath chat[604]: CONNECT
Jan 2 18:18:16 sarnath chat[604]: -- got it
Jan 2 18:18:16 sarnath chat[604]: send (^M)
Jan 2 18:18:16 sarnath chat[604]: expect (ost:)
Jan 2 18:18:16 sarnath chat[604]: 38400^M
Jan 2 18:18:18 sarnath chat[604]: - Blue Moon K56flex -^M
Jan 2 18:18:18 sarnath chat[604]: ^M
Jan 2 18:18:18 sarnath chat[604]: Select HOST:^M
Jan 2 18:18:18 sarnath chat[604]: ^M
Jan 2 18:18:18 sarnath chat[604]: ppp^M
Jan 2 18:18:18 sarnath chat[604]: shell^M
Jan 2 18:18:18 sarnath chat[604]: bbs^M
Jan 2 18:18:18 sarnath chat[604]: ^M
Jan 2 18:18:18 sarnath chat[604]: Type new to register for net access.^M
Jan 2 18:18:18 sarnath chat[604]: ^M
Jan 2 18:18:18 sarnath chat[604]: host:
Jan 2 18:18:18 sarnath chat[604]: -- got it
Jan 2 18:18:18 sarnath chat[604]: send (ppp^M)
Jan 2 18:18:18 sarnath chat[604]: expect (ogin:)
Jan 2 18:18:18 sarnath chat[604]: ^M
Jan 2 18:18:18 sarnath chat[604]: host: ppp^M
Jan 2 18:18:18 sarnath chat[604]: login:
Jan 2 18:18:18 sarnath chat[604]: -- got it
Jan 2 18:18:18 sarnath chat[604]: send (xxxxxxx^M)
Jan 2 18:18:18 sarnath chat[604]: expect (assword:)
Jan 2 18:18:18 sarnath chat[604]: xxxxxxx^M
Jan 2 18:18:18 sarnath chat[604]: Password:
Jan 2 18:18:18 sarnath chat[604]: -- got it
Jan 2 18:18:18 sarnath chat[604]: send (********^M)
Jan 2 18:18:18 sarnath pppd[599]: Serial connection established.
Jan 2 18:18:19 sarnath pppd[599]: Using interface ppp0
Jan 2 18:18:19 sarnath pppd[599]: Connect: ppp0 <--> /dev/ttyS1
Jan 2 18:18:23 sarnath pppd[599]: Modem hangup
Jan 2 18:18:23 sarnath pppd[599]: Connection terminated.
Jan 2 18:18:24 sarnath pppd[599]: Exit.
Jan 2 18:19:56 sarnath kernel: PPP: ppp line discipline successfully unregistered
Sorry for the long excerpt, by the way---if I had a better idea of where the trouble was, I could perhaps have quoted fewer lines...
What I find perplexing is that the modem hangup comes directly after the connection is established, but with no IP number yet assigned. I have also attached my /etc/ppp/options, /etc/ppp/scripts/ppp-on, and /etc/ppp/scripts/ppp-on-dialer files. These all come with the RedHat 5.0 distribution, obviously edited for my circumstances.
Ultimately, I guess my question is: "What am I missing?" Connecting manually is not exactly a Brobdingnagian task, but it does keep me from using diald, along with some other clever script-driven ppp utilities. I have been up and down the PPP-HOWTO, along with other /usr/doc/ppp files, and cannot effect a solution. I assume what I am missing is terribly obvious, and maybe a fresh pair of eyes can see after a few minutes what mine cannot after many hours. If there is any more information you require, I will be happy to provide it, though I have tried to be as painfully complete as possible in this e-mail.
Anyway, I thank you for any time you can spare on this problem, and I look forward to hearing from you!
Regards, Michael Sipior
debug
-detach
/dev/ttyS1
38400
modem
lock
crtscts
defaultroute
asyncmap 0
mtu 552
mru 552
Try it with the -detach directive commented out.
#!/bin/sh
#
# Script to initiate a ppp connection. This is the first part of the
# pair of scripts. This is not a secure pair of scripts as the codes
# are visible with the 'ps' command. However, it is simple.
#
# These are the parameters. Change as needed.
TELEPHONE=*******       # The telephone number for the connection
ACCOUNT=msipior         # The account name for logon (as in 'George Burns')
PASSWORD=********       # The password for this account (and 'Gracie Allen')
LOCAL_IP=0.0.0.0        # Local IP address if known. Dynamic = 0.0.0.0
REMOTE_IP=0.0.0.0       # Remote IP address if desired. Normally 0.0.0.0
NETMASK=255.255.255.0   # The proper netmask if needed
#
# Export them so that they will be available at 'ppp-on-dialer' time.
export TELEPHONE ACCOUNT PASSWORD
#
# This is the location of the script which dials the phone and logs
# in. Please use the absolute file name as the $PATH variable is not
# used on the connect option. (To do so on a 'root' account would be
# a security hole so don't ask.)
#
DIALER_SCRIPT=/etc/ppp/scripts/ppp-on-dialer
#
# Initiate the connection
#
# I put most of the common options on this command. Please, don't
# forget the 'lock' option or some programs such as mgetty will not
# work. The asyncmap and escape will permit the PPP link to work with
# a telnet or rlogin connection. You are welcome to make any changes
# as desired. Don't use the 'defaultroute' option if you currently
# have a default route to an ethernet gateway.
#
exec /usr/sbin/pppd debug lock modem crtscts /dev/ttyS1 38400 \
        asyncmap 20A0000 escape FF kdebug 0 $LOCAL_IP:$REMOTE_IP \
        noipdefault netmask $NETMASK defaultroute connect $DIALER_SCRIPT
Some of these options conflict with those you listed in your /etc/ppp/options file above. In particular I notice that the asyncmap is different. I also note that the MTU/MRU values you have listed are a bit odd. I usually see 296 for slower modems (14.4 and under) and 576 for faster modems (28.8 and up). The 'kdebug' option here results in those kernel/syslog messages from pppd (and the -v on your chat script, below, results in the syslog messages from that command).
Try it with an empty /etc/ppp/options file (that file is global and might conflict with the directives that you're putting on the command line). Try removing all of these options from the pppd invocation --- and isolating them into their own options file. Replace all the options on this long command line with just:
/usr/sbin/pppd file /etc/ppp/foo.options
... and put each option directive (and its arguments) on a single line in the foo.options file.
#!/bin/sh
#
# This is part 2 of the ppp-on script. It will perform the connection
# protocol for the desired connection.
#
exec /usr/sbin/chat -v \
        TIMEOUT         3 \
        ABORT           '\nBUSY\r' \
        ABORT           '\nNO ANSWER\r' \
        ABORT           '\nRINGING\r\n\r\nRINGING\r' \
        ''              '\rAT' \
        'OK-+++\c-OK'   ATH0 \
        TIMEOUT         30 \
        OK              ATDT$TELEPHONE \
        CONNECT         '' \
        ost:            ppp \
        ogin:--ogin:    $ACCOUNT \
        assword:        $PASSWORD
This seems like an odd way to do this. I usually isolate my chat scripts in their own file and use my ppp/options file's 'connect' directive to invoke 'chat' with the -f option --- which points to my standalone chat script like so:
connect /usr/sbin/chat -v -f /etc/ppp/MYISP.chat
... with different files for different chat scripts. I also invoke 'pppd' with just the 'file' directive on its command line --- like:
/usr/sbin/pppd file /etc/ppp/MYISP.options
... and localize my options therein. My global options file then just has the "lock" directive --- or is blank (for some special cases).
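To make that concrete, a hypothetical /etc/ppp/MYISP.options might look roughly like this (MYISP is only a placeholder name; the device, speed and paths should match your own setup, and the chat script lives in its own file):

/dev/ttyS1
38400
modem
crtscts
noipdefault
defaultroute
connect "/usr/sbin/chat -v -f /etc/ppp/MYISP.chat"

... which you would then invoke with the 'pppd file /etc/ppp/MYISP.options' command shown above.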
I really don't see anything that jumps out at me. However, I've noted a couple of oddities. One other suggestion which relates to a similar problem I had once:
When you log in interactively, look for the last bit of plain text that's printed by your ISP's system before it starts printing the PPP "gibberish".
One of the ISPs I worked with would print "starting PPP..." after my script would enter the password. This was getting "stuck" in a buffer somewhere and confusing pppd (similar to what happens in C when you use a '' library call with a bad format specifier). The problem only showed up when I was using the chat script and not if I used 'minicom' to start the session, then quit out of that while leaving the connection up and using pppd to take over the existing connection.
Adding a last "expect" string to my chat script to "gobble that last text message up" seemed to solve the problem.
Try that and see if it helps. Then ask your ISP for some additional tips.
You might also try one or several of the GUI PPP configuration frontends. I've never used any of them --- but they've apparently gotten pretty good for the common cases. Any of the good ones should generate text chat scripts and options files that you can manually tweak.
The original thread appeared in Issue 36, "fconfig reports TX errors on v2.1.x kernels".
From Peter Bruley on Mon, 04 Jan 1999
Thanks Jim
I've posted the question to a few groups and have not yet heard any replies.
Peter
Actually I looked into it a bit more --- read the Linux Kernel Mailing List FAQ at http://www.tux.org/lkml and you'll find that this is a known problem between the new kernels and an older version of 'ifconfig' --- update your binaries as recommended in the LKML.FAQ.
Your "one-stop" shopping center for getting all the requisite user space program updates for your 2.2 kernel would be at LinuxHQ (http://www.linuxhq.com/pgmup21.html)
Hope that helps.
From cly on Mon, 11 Jan 1999
Hi! My problem is, that the system clock runs too fast, about 4 mins/3 days.
That's a pretty bad clock. However, there are ways to cope with it.
It's a big problem, because this server is time server for some workstations.
Are you using timed, xntpd or some other time synchronization server/protocol?
If you have a dedicated connection to the Internet, I'd recommend using xntpd --- and thus using the NTP protocol.
This is a complex protocol with largely inaccessible documentation. So far as the average sysadmin is concerned it should simply be a matter of installing xntpd on one or more Internet accessible (bastion) hosts --- such as your nameserver and external mail relay, and providing it with a suitable configuration file.
Mine looks like:
#/etc/ntp.conf
server nebu1-atm.ucsd.edu ## (132.239.254.49)
server ns.scruz.net ## (165.227.1.1)
server 127.127.1.0 # local clock (LCL)
fudge 127.127.1.0 stratum 10 # LCL is unsynchronized
driftfile /etc/ntp.drift
... note that the servers I've chosen are listed among the Stratum-2 (secondary) public time servers at the NTP web pages:
http://www.eecis.udel.edu/~ntp
... also note that you should ping and run ntpdate against any of these before you try to use them as one of your xntpd time source servers. (This list is sadly out of date --- and includes hosts which haven't responded to my pings and time requests in a couple of years --- and that's just from a sampling of the ones in California!).
But I'm getting ahead of myself. First you need to ensure that your clock is even close (within 1000 seconds) to the correct time before you load the xntpd daemon. So, during startup you should run the 'ntpdate' command to set your system time. (I also run the /sbin/clock -w command to write the system time to the CMOS hardware clock --- and have a cron job to repeat that command once a day).
Using this technique during startup you have your system time in the right ballpark. (The cron job also limits how far off your CMOS/hardware clock can drift).
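The pieces look roughly like this (the server names are just the ones from my ntp.conf above --- pick servers near you, and check the paths on your own distribution):

# in a boot/rc script, after networking is up but before xntpd starts:
/usr/sbin/ntpdate ns.scruz.net nebu1-atm.ucsd.edu
/sbin/clock -w

# crontab entry to write the system time back to the CMOS clock daily:
25 4 * * *    /sbin/clock -w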
Then you have your startup scripts load the NTP daemon after your networking interfaces and routes have been established. Then this daemon will periodically poll its time servers, measuring the networking delays and arriving at a precise approximation of the UTC time. I gather that the default is every 17 minutes. You'll see UDP traffic between port 123 on the clients and servers.
I recommend that you configure at least one exposed (bastion) server with xntpd and another one or two internal hosts which access the externally visible one. Then all of your internal systems can access the internal (stratum-4) time servers. If you have less than a hundred systems your external systems should probably refer to stratum-2 servers (to limit the load on the primary (stratum-1) servers).
You can also buy hardware clocks which xntpd can use to set the time. Some of them are radio clocks, others monitor GPS (global positioning system) or Loran signals (which would also be considered "radio" clocks I guess) and others are high precision clocks embedded on PC or other interfaces.
Thus, if you connect a GPS or Loran based high precision clock to one of your servers you can be your own stratum-1 time source. (If you go to the expense of buying one of these --- and they can cost over $1000 US --- I highly recommend that you make that server publicly available as a primary NTP server).
I gather that there are also modem based time services that are supported by the NTP package. I have yet to see any configuration examples for using these.
Note:
It has sometimes been the experience that the local clock oscillator frequency error is too large for the NTP discipline algorithm, which can correct frequency errors as large as 30 seconds per day. There are two possibilities that may result in this problem. First, the hardware time-of-year clock chip must be disabled when using NTP, since this can destabilize the discipline process. This is usually done using the tickadj program and the -s command line argument, but other means may be necessary. For instance, in the Sun Solaris kernel, this must be done using a command in the system startup file.
... in your case your system may require a bit of extra work to get xntpd working reliably. You're experiencing over a minute per day in slew --- so you'll almost certainly need to read these details from the NTP home page.
As I've said --- the biggest failing in the xntpd package is that the documentation is written like a doctoral thesis. It adds incredible complexity to a process that should be very simple to the "user" (the typical sysadmin, in this case).
Another problem with the whole system (protocol, utilities etc) is that it's designed for systems with dedicated Internet connections. No provisions or suggestions are made for those of us with dial-up (dial on demand) connection over modems, ISDN lines, etc.
My solution was to create a cron job that kills the xntpd on my internal time server once every day --- fires up my link to the 'net, runs 'ntpdate' against three different servers and then restarts the daemon.
This is specifically NOT recommended in the NTP documentation. They are concerned that the sudden change in time might confuse some daemons and processes. However, it seems to be the only choice for those of us that want to maintain reasonable time synchronization but don't have the money to spend on dedicated internet connections and/or hardware clocks.
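For what it's worth, that cron job is nothing fancier than a little shell script along these lines (server names and paths here are illustrative only; a demand-dialing setup such as diald brings the link up when ntpdate's packets try to go out):

#!/bin/sh
# crude nightly time resync for a dial-up site
killall xntpd
/usr/sbin/ntpdate ns.scruz.net nebu1-atm.ucsd.edu
/sbin/clock -w
/usr/sbin/xntpd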
You can find a list of those high precision time clocks at the NTP web pages. I'm just sorry that you'll have to muddle through all that erudite prose to get at the information you want.
(Meanwhile I have changed my network and I do have a dedicated connection (DSL) now. So if anyone wants to send me a good GPS PC/clock I'll be happy to set up an ntp.starshine.org public time server.)
My config:
Slackware 3.5 with 2.0.36 kernel on iP200MMX
What to do?
Cly
I hope that helps. I don't know if xntpd is included with Slackware --- but you can certainly find and build the source package from any good Linux archive site or from the NTP home pages that I've listed above.
From David Augros on Sun, 10 Jan 1999
Dear Jimbo,
You seem to have all the answers (most of the ones to the good questions anyway...), and I am sure your wife is as lovely as she is capable when it comes to formatting and scripting. But the fact remains that every month, TAG is replete with typographical and spelling errors that would make a school teacher blush. Now I realize that you perform this service as a gift to the Linux community, and let me assure you, we are most grateful to benefit from your expertise and experience. I always enjoy reading your piece, (and I think Heather's comments sometimes cut to the quick much faster than yours do, ... women's intuition I guess). But, James, my man, we really have to think about what this looks like to the rest of the world. Yes the web and all other trappings of the internet bring with them an historically unprecedented dynamic of ever new and ever updated and always changing information... of this I am not unaware, but you still really need someone to go over your article before publishing. The rules of grammar do not change between most postings of TAG. Even an incompetent editor would catch eighty or more percent of these errors. And I am not talking about the sometimes illiterate nonsense that you receive as email on a (most likely) daily basis, but your own answers to said mail. If there is nobody else to do it, then let me know and we will work something out. The fact is, I really can't stand to see another month's worth of quality TAG go out to the world in the sorry state it has been doing so for as long as I have been reading it. Once again, I think you are the man, and I just want to help out here. That should be what you walk away with.
My only complaint regarding your writing would be the utter lack of paragraph structuring.
As you've noted, my faults relate to a balance between the time I can devote to the writing and editing vs. the time I reserve for other work.
I'm sorry for those typos that get through. On the whole of it I don't think my grammar is as deplorable as you seem to suggest. However, it's probably not perfect.
I'd welcome an editor with the time to correct the typos --- though I'm not sure how we'd arrange it.
I could ask Heather to read my work as she formats it, with full license to edit it. Her script is getting pretty good, and she might find the time when I haven't flooded her with close to 100 separate messages. We'll see.
(Meanwhile I can understand your frustration to some degree. I'm fairly forgiving when it comes to netnews, e-mail and web forums --- but I find the number of typos in professionally published and printed books to be pretty irritating).
Warm regards, Dave
From The Answer Guy on Mon, 11 Jan 1999
I ate the fortune cookie first, then read what Jim Dennis copied me on:
Dear Jimbo,
You seem to have all the answers (most of the ones to the good questions anyway...), and I am sure your wife is as lovely as she is capable when it comes to formatting and scripting. But the fact remains that every month, TAG is replete with typographical and spelling errors that would make a school teacher blush.
All one paragraph? "Typographical and spelling" -- I think Strunk would frown. Calm down, have a nice cup of tea.
(Darn it, now I'll have to paint a speech bubble for myself. sigh)
[ Actually, I painted a couple bubbles, but I'm not sure which to use, and would rather hope I don't become a regular on the answering side. I'm kinda torn between an asterisk bubble (star, get it?) or a bubble half drawn by a paintbrush. -- Heather ]
Bear in mind that I make very little effort to correct the querent, only the AnswerGuy. Rewriting the query would reduce our readers' understanding of how the question was asked. I only correct the AnswerGuy in the context of reading the columns at a much faster rate than the average reader... so a few things slip through. Was any of it difficult to understand because of grammar? (Jargon isn't a grammar problem here -- people are asking about technical issues.)
As I noted in one of the messages this last month, these are real people asking, and a real person answering the question. Real people do not speak perfect Oxford English, even though some try.
Now I realize that you perform this service as a gift to the Linux community, and let me assure you, we are most grateful to benefit from your expertise and experience. I always enjoy reading your piece, (and I think Heather's comments sometimes cut to the quick much faster than yours do, ... women's intuition I guess).
And avoiding making them except to provide real content... I'm more of a GUI fan than Jim is, so have a smidge more experience with, as one querent put it, Brand X compatibility.
But, James, my man, we really have to think about what this looks like to the rest of the world. Yes the web and all other trappings of the internet bring with them an historically unprecedented dynamic of ever new and ever updated and always changing information... of this I am not unaware, but you still really need someone to go over your article before publishing. The rules of grammar do not change between most postings of TAG.
Neither do deadlines. I do wonder, though, if the translators that convert the Gazette into Italian, French, etc, make any effort to keep the "bad grammar" of many of the querents intact.
Maybe I'll run one of the translations back through Babelfish... I have reasonable evidence that its translations are terrible. It ought to be a good laugh.
Even an incompetent editor would catch eighty or more percent of these errors.
To edit for the purpose of adding HTML, and for the purpose of perfecting the grammar, are not the same thing.
And I am not talking about the sometimes illiterate nonsense that you receive as email on a (most likely) daily basis, but your own answers to said mail. If there is nobody else to do it, then let me know and we will work something out. The fact is, I really can't stand to see another month's worth of quality TAG go out to the world in the sorry state it has been doing so for as long as I have been reading it.
As the Gazette is completely under the LDP, you are of course welcome to correct it, including old issues. The web is not the print medium, so you do not really have to feel it is frozen on paper and irreparable, even if its publishing schedule deliberately follows a magazine format.
Considering your offer more thoughtfully, how are you at tight deadlines? We're talking 3 days or less here.
I really hope you're not planning to restructure whole sentences or paragraphs; they often make better sense when taken as a whole than when taken alone. Nor is perfect grammar always desirable; many of the world's classic novels get bad grades from Grammatik(tm).
Once again, I think you are the man, and I just want to help out here. That should be what you walk away with.
My only complaint regarding your writing would be the utter lack of paragraph structuring.
See splits, above.
As you've noted, my faults relate to a balance between the time I can devote to the writing and editing vs. the time I reserve for other work.
I'm sorry for those typos that get through. On the whole of it I don't think my grammar is as deplorable as you seem to suggest. However, it's probably not perfect.
I'd welcome an editor with the time to correct the typos --- though I'm not sure how we'd arrange it.
I could ask Heather to read my work as she formats it, with full license to edit it. Her script is getting pretty good, and she might find the time when I haven't flooded her with close to 100 separate messages. We'll see.
I have always assumed I had license to edit, but I only correct fairly minor things. I'm trying to provide to the world basically the same letter the querent received. To change it too much, would mean we were becoming more of a "useful topics this month" column rather than faithful republication of your mail threads.
For example:
- I will not completely reformat sentences, but I will add the occasional spaced-out verb or delete doubles. (If this leads to the oft-bemoaned "passive voice" - tough luck.) These aren't that common.
- I make a sincere (but I suspect insufficient) effort to get the right "its"/"it's" since Jim's mental spellchecker seems to consider them equal. "There" and "they're" seem to get swapped occasionally too.
- Sometimes, URLs have moved since the answer was given.
- Occasionally my own mental spellchecker catches something out of place. However, usually I'm going too fast.
I don't run ispell against it because I'd constantly have to feed jargon to our dictionary. I don't have time for that. I don't even remember if I ran 'lynx -traverse' across the tree this time like I normally do, to check for broken links.
As a personal comment I consider any change to the original content to be gravy; my purpose in transmuting the messages to HTML is to retain the appearance of the original mail. In some threads, that's a lot of work.
(Meanwhile I can understand your frustration to some degree. I'm fairly forgiving when it comes to netnews, e-mail and web forums --- but I find the number of typos in professionally published and printed books to be pretty irritating).
Last I heard all of Linux Gazette is a volunteer, unpaid effort. (To my knowledge none of the authors and editors lack a separate job.) Perhaps if it is ever "professionally published", i.e. put in book form, it will be sifted through for innocuous typos.
However, I suspect those wanting a more organized restructuring of the knowledge Jim has to offer will be willing to wait for his book, which is a paid effort, with paid editors.
Heather Stern
Use what talents you possess: the woods would be very silent if no birds sang there except those that sang best. -- Henry Van Dyke
From Heather Stern on Sat, 16 Jan 1999
All one paragraph? "Typographical and spelling" -- I think Strunk would frown.
I agree. I used to write term papers that way too. I'll probably never break the habit. But salt-water-taffy-wise, I think the message was OK.
Calm down, have a nice cup of tea.
Earl grey for me, thanks.
Was any of it difficult to understand because of grammar? (Jargon isn't a grammar problem here -- people are asking about technical issues.)
My point does not concern comprehension so much as presentation. If a questioner says something silly, ungrammatical, or can't spell to save his life, that's one thing. But when Jim's answers contain very preventable errors, it just looks sloppy, and it is this that I wish to address. It may be a very superficial point, but it remains a point nonetheless.
As I noted in one of the messages this last month, these are real people asking, and a real person answering the question. Real people do not speak perfect Oxford English, even though some try.
I agree that speech is informal, and I would never suggest that it is important to correct spoken grammar-- the whole "spoken" dynamic of usenet, email, and even TAG is a wonderful thing, and you are right to want to preserve it. But TAG is also something more than plain speech. These messages are archived and available for the indefinite future. Web publishing, though more liquid than other forms, is still publishing, and as such, it lacks the character of the spoken word which bounces off the walls and ceiling and seeps into oblivion. I say, leave the questioner to fend for himself, his crummy wording is his alone. But Jim's responses reflect the professionalism of TAG, The Linux Gazette, and more remotely, but still in a real way, the whole Linux community. Jim's column would benefit from a "typo filter," and the whole world would be just that much sunnier
<...snippage...>
Maybe I'll run one of the translations back through Babelfish... I have reasonable evidence that its translations are terrible. It ought to be a good laugh.
Babelfish is terrible, but it seems to be the best thing going for now. I have a perl script which gives a nice command line interface to the said fish, and it has provided me with many good laughs. I can send it if you like.
Even an incompetent editor would catch eighty or more percent of these errors.
To edit for the purpose of adding HTML, and for the purpose of perfecting the grammar, are not the same thing.
Please understand that I in no way intended to imply that you were incompetent (or less than that as it seems you have taken it). This remark was meant to highlight the fact that no such editor is now in the loop, and that even a poor one would be better than none at all. I know the difference between formatting for HTML and general editing, and I understand it is the former for which you are primarily responsible. It was my intention to point out that no one is responsible for the latter, nothing more than that.
As the Gazette is completely under the LDP, you are of course welcome to correct it, including old issues. The web is not the print medium, so you do not really have to feel it is frozen on paper and irreparable, even if its publishing schedule deliberately follows a magazine format.
It is not so much my desire to have a "correct" copy of the Gazette for my own personal use as it is my desire to see the Gazette show its best face to the world. And that face is currently located at http://www.linuxgazette.com/issue36.
[ The top level index, http://www.linuxgazette.com/. probably would have been a better place to point. Oh well! -- Heather ]
Considering your offer more thoughtfully, how are you at tight deadlines? We're talking 3 days or less here.
Three days is more than enough time to do an old s/there/their/ hear and their, if you understand my meaning.
I really hope you're not planning to restructure whole sentences or paragraphs; they often make better sense when taken as a whole than when taken alone. Nor is perfect grammar always desirable; many of the world's classic novels get bad grades from Grammatik(tm).
First of all, Grammatik can do something unmentionable to something else, even less mentionable to the first unmentionable thing. Secondly, the kind of thing I am proposing here is like the following (from http://www.linuxgazette.com/issue36/tag/b.html):
change this: ... kernel core team has soundly reject suggestions that Linux adopt
to this: ... kernel core team has soundly rejected suggestions that Linux adopt
I have always assumed I had license to edit, but I only correct fairly minor things. I'm trying to provide to the world basically the same letter the querent received. To change it too much, would mean we were becoming more of a "useful topics this month" column rather than faithful republication of your mail threads.
For example:
I agree with/completely understand/fully support all of the above.
Last I heard all of Linux Gazette is a volunteer, unpaid effort. (To my knowledge none of the authors and editors lack a separate job.) Perhaps if it is ever "professionally published", i.e. put in book form, it will be sifted through for innocuous typos.
Just because it is a volunteer effort does not mean that it has to be sloppy. The kernel was written and is maintained by a strictly unpaid army of programmers, and it is a beautiful piece of work. We should all hold ourselves to the same standards. God bless America... OK, I'll stop now.
However, I suspect those wanting a more organized restructuring of the knowledge Jim has to offer will be willing to wait for his book, which is a paid effort, with paid editors.
I will be the first one on my block to buy it, as soon as it is available, you can count on it.
All things end up somewhere, and here we are...
--Dave
From David Augros on Sun, 17 Jan 1999
[snip]
My point does not concern comprehension so much as presentation. If a questioner says something silly, ungrammatical, or can't spell to save his life, that's one thing. But when Jim's answers contain very preventable errors, it just looks sloppy, and it is this that I wish to address. It may be a very superficial point, but it remains a point nonetheless.
So Jim is supposed to be held to higher standards in just tossing off an answer than the world of people is when tossing off a question. Hmmm. I'm not sure I agree.
As I noted in one of the messages this last month, these are real people asking, and a real person answering the question. Real people do not speak perfect Oxford English, even though some try.
[ Specifically, in "TAG suggestions" last issue. -- Heather ]
I agree that speech is informal, and I would never suggest that it is important to correct spoken grammar-- the whole "spoken" dynamic of usenet, email, and even TAG is a wonderful thing, and you are right to want to preserve it. But TAG is also something more than plain speech. These messages are archived and available for the indefinite future.
It isn't graven in stone; if you want to apply edits, go for it, and send the corrected package to the editor of Linux Gazette. There may be a delay but she will probably post changes.
Web publishing, though more liquid than other forms, is still publishing, and as such, it lacks the character of the spoken word which bounces off the walls and ceiling and seeps into oblivion.
Actually, I suspect people like the Answer Guy column because he really speaks with them, not because he stands at a Virtual Podium and makes perfect Oxford English speeches. Although his words are kept from oblivion by their posting, I do not think they lose their spoken nature here.
I say, leave the questioner to fend for himself, his crummy wording is his alone. But Jim's responses reflect the professionalism of TAG, The Linux Gazette, and more remotely, but still in a real way, the whole Linux community. Jim's column would benefit from a "typo filter," and the whole world would be just that much sunnier
Well, tell ya what. I'll make more of an effort to clobber typos as I roll through the column. And we'll see if anyone else in the world even notices. If they do, and I am just not good enough at mopping them up, then we'll see what can be done about slipping a grammarian into the loop.
<...snippage...>
Maybe I'll run one of the translations back through Babelfish... I have reasonable evidence that its translations are terrible. It ought to be a good laugh.
Babelfish is terrible, but it seems to be the best thing going for now. I have a perl script which gives a nice command line interface to the said fish, and it has provided me with many good laughs. I can send it if you like.
Nah, I have better humor sources for my usual fun. Send it to the 2cent tips if you feel inclined.
[snip]
As the Gazette is completely under the LDP, you are of course welcome to correct it, including old issues. The web is not the print medium, so you do not really have to feel it is frozen on paper and irreparable, even if its publishing schedule deliberately follows a magazine format.
It is not so much my desire to have a "correct" copy of the Gazette for my own personal use as it is my desire to see the Gazette show its best face to the world. And that face is currently located at http://www.linuxgazette.com/issue36.
And you seem to retain the delusion that it's burnt in and can't be changed now that it's posted. In fact, a couple of months ago when I discovered I'd broken some posted URLs, I sent the correction in, and pif they were corrected. I'd like to think this isn't just because I help edit HTML.
Considering your offer more thoughtfully, how are you at tight deadlines? We're talking 3 days or less here.
Three days is more than enough time to do an old s/there/their/ hear and their, if you understand my meaning.
If you're only going to do search-and-replace I am certainly not adding another human to the loop... 3 days, maybe 4, is the total deadline block, from the last posting until I've sent in a final package, and I usually post an interim or two. The interim postings are because we're usually darn close to late -- and I refuse to leave Marjorie high and dry with all of it if we have a last minute problem.
I really hope you're not planning to restructure whole sentences or paragraphs; they often make better sense when taken as a whole than when taken alone. Nor is perfect grammar always desirable; many of the world's classic novels get bad grades from Grammatik(tm).
First of all, Grammatik can do something unmentionable to something else, even less mentionable to the first unmentionable thing. Secondly, the kind of thing I am proposing here is like the following (from http://www.linuxgazette.com/issue36/tag/b.html):
change this:
... kernel core team has soundly reject suggestions that Linux adopt
to this: ... kernel core team has soundly rejected suggestions that Linux adopt
That's fair.
[snip]
Last I heard all of Linux Gazette is a volunteer, unpaid effort. (To my knowledge none of the authors and editors lack a separate job.) Perhaps if it is ever "professionally published", i.e. put in book form, it will be sifted through for innocuous typos.
Just because it is a volunteer effort does not mean that it has to be sloppy. The kernel was written and is maintained by a strictly unpaid army of programmers, and it is a beautiful piece of work. We should all hold ourselves to the same standards. God bless America... OK, I'll stop now.
And you are not seeing the first edition of CVS source code these kernel hackers posted, you're seeing one man's code plus repairs from possibly hundreds of others. In the Gazette, the mail has been through exactly two people, except in the case of some threads, and there it may have gone through as many as five, except that it isn't the habit of mailing list readers to correct other people's grammar when quoting them.
The LDP license offers the same opportunity for all readers who are not deep C fishermen; thousands of eyes can read and correct the Linux Gazette, and every HOWTO and MINI-HOWTO can be given fresh polish. Many info pages and man pages could be improved as well; just send the fix to the package maintainer instead. In short - don't just tell us how wonderful the world could be. Go forth and make it prettier. You're on the right track in offering aid to us, but missing the big picture.
However, I suspect those wanting a more organized restructuring of the knowledge Jim has to offer will be willing to wait for his book, which is a paid effort, with paid editors.
I will be the first one on my block to buy it, as soon as it is available, you can count on it.
All things end up somewhere, and here we are... --Dave
So, I'll be putting a little more effort towards grammar this month. Any of you with a mind to it should pick a HOWTO, a MINI-HOWTO, or an old article of the Gazette or some other LDP item, and apply yourself to it. We'll clean up the open documentation of Linux like a bunch of Scrubbing Bubbles (tm?).
[ The Scrubbing Bubbles are a trademark of DowBrands, Inc. -- Heather ]
Folks, let me know if you notice
Heather Stern
star@starshine.org
Never tell people how to do things. Tell them WHAT to do and they will surprise you with their ingenuity. -- Gen. George S. Patton, Jr.
From Osborne A. Martin on Thu, 14 Jan 1999
Hello,
I am a Linux novice but successfully managed to load, configure and get RedHat on the net. However, I ran into problems when trying to close my connection. I am using the "exec pppd ..." command to make the modem connection. Everything is great here, but the thing doesn't want to disconnect. I use "ps ax" to find the running 'pppd' and "kill -9 <PID>" but I still don't disconnect. Any idea how to solve this one?
Thanks in advance, Osborne
Sounds weird to me. What if you just run 'pppd' (without the 'exec' command)? What user are you running the 'kill' command as? (If you get a "permission denied" or "operation not permitted" error --- it would be because pppd is setting itself into its own process group and running as 'root' --- while you are trying to issue the 'kill' command as an unprivileged user).
For a simple home system where console security is a non-issue --- just leave a 'root' shell laying around on one of your virtual consoles or in an 'xterm' and issue your 'kill' command from there.
You could install and configure 'sudo' to run a kill script as 'root' --- listing your normal login ID as one of the users that's allowed to execute this command. You could write an SUID perl (sperl) script or a small C wrapper to accomplish the same thing (but that requires more background than I have time to give at the moment).
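As a rough sketch of the 'sudo' approach (the script name and the user name "osborne" are made up for the example --- note that pppd normally records its PID in /var/run/ppp0.pid):

#!/bin/sh
# /usr/local/sbin/ppp-off --- tiny root-owned helper to hang up the link
kill `cat /var/run/ppp0.pid`

... plus an /etc/sudoers entry (edit it with visudo):

osborne ALL = NOPASSWD: /usr/local/sbin/ppp-off

... after which 'sudo /usr/local/sbin/ppp-off' does the deed without leaving a root shell lying around.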
Sometimes the fact that Linux is a multi-user operating system with a tendency to protect system processes and files from "normal" users can be a bit inconvenient. On the other hand it is the principal reason why computer viruses are virtually unheard of under Linux or any other form of Unix. (I've only encountered one case of virus infection "in the wild" in all the years that I've used Linux and none for any other version of Unix --- and that victim was just being silly).
P.S. Every Linux site should have in large bold letters at the top of the site; "stay away from win modems of any type and modems with the Rockwell driver set". I purchased one of each before buying a Zoom Modem that actually worked with my Linux box. I found this type of info. very hard to find when it should be shouted from the mountain tops.
Every responsible retailer should also ask if you're running Windows before selling you one of the blasted things. Every responsible manufacturer should clearly label the package as FOR WIN '95 AND WIN '98 ONLY.
At this point I have no sympathy for any losses of business that winmodem manufacturers suffer as a result of the RMAs (return merchandise authorizations) they get from selling these pieces of junk to us (and Mac users, et al).
It's not just a matter of educating new Linux users --- it's a matter of educating the whole industry; this is not an MS Windows world! (It never really was --- though a big chunk of the media and market place have been so deluded for the past few years).
From ktoyama on Thu, 14 Jan 1999
Dear Answer Guy,
Great forum of Q&A here at the Linux Gazette. Here is my problem.
I'm trying to use a US Robotics 28.8 (no winmodem) and it works fine under the linux console under windows 1-6. Once I start-up X it doesn't seem to connect to the modem and seems to lose the connection to the modem. I start up the pppd which invokes the chat script but the modem never does a connect. But if I quickly switch to (CTRL-ALT-F1) or an F1-F6 window, the modem will dial and connect. Then I switch back to X and there is a connection. I can check mail, view web pages, but then after about 2 minutes everything stalls and the connection is lost. If I switch to a console for 15-20 seconds the link restores its speed and then I can switch back to X. Then the cycle starts all over again. Please help me in determining the root of the problem. Thanks.
Sincerely,
Kevin
My first guess would be that you have an IRQ problem. If your modem and your mouse are trying to use the same IRQ --- and your mouse is inactive while you're at your text consoles (i.e. you're not using gpm) --- that would be the most likely problem.
Other problems are possible. Some video cards use IRQ 2/9 (daisy chained IRQ pair) which might cause conflicts while you were in graphics mode, while not causing any problem from text consoles.
Yet another problem might have to do with the system's overall computing power. If you have a high speed modem connection it could be that X takes enough of your CPU horse power that the serial driver gets starved for attention (although that would also suggest flow control problems).
Of course a 28.8 and any sort of Pentium (even a P60) should be reasonably well matched --- assuming you have enough RAM that you aren't thrashing to disk.
Does this only happen with PPP? What if you connect to a BBS (or dial-up shell), start a file transfer and then start X? If the transfer (zmodem, Kermit, or whatever) still runs smoothly for several minutes after switching to X --- it suggests some sort of networking problem. If not, try running a file transfer while starting a non-X graphics program (such as 'zgv' --- the SVGAlib .GIF and JPEG viewer).
Also try running a file transfer while performing "cut and paste" operations on your text mode VCs (run 'gpm' to do that). Transfer a couple of page fulls of a man page into an empty editor session ('vi' -- 'emacs' or whatever).
As with any problems with any daemons, look in your /var/log/messages. Are there any error messages being posted through the syslog subsystem? Try increasing the debugging output of your pppd by adding the debug and kdebug directives to your /etc/ppp/options file (as per the man pages).
Try posting the contents of your PPP options file(s) and the command that's being used to invoke it (which may over-ride many of the directives in the options file by listing conflicting options on its command line or pointing to a supplemental options file using the "file" directive).
Try a different video card and/or a different X server. (You could even try starting a "monochrome" X server).
It's also possible that the problem lies with some X application or "toy" ('clock', your window manager, etc) rather than with the X server itself. If the problem recurs while running 'zgv' or some other SVGAlib program --- then you can conclude that it has more to do with the hardware/drivers than with the applications.
With any troubleshooting process you want to try all sorts of things that help isolate the exact components (hardware and software) that are involved. Many of these tests may not be usable as "work arounds" but they can define the problem more precisely.
You can browse around under the /proc filesystem to find out a bit more about which IRQs are in use and you can use the 'procinfo' and similar commands to determine more.
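Two quick ones to start with (ttyS1 here is only a guess at the modem's port --- use whatever device your PPP scripts point at):

cat /proc/interrupts
setserial -g /dev/ttyS0 /dev/ttyS1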
(If this is a laptop running PCMCIA drivers -- for example --- then there are many other potential problems, as laptop hardware tends to be very quirky --- video and PCMCIA interfaces especially).
From R. Brock Lynn on Sun, 17 Jan 1999
Hi Jim,
We met briefly at USENIX '98. I sat in front of you in the Red Hat Admin Tutorial. I think you had asked me about bochs or something like that. But I haven't done anything with it for a while... limited drive space until just this xmas when I bought two brand new 10 gig IDE (ATA3) IBM Deskstar drives.
And I can't for the life of me get the full 10 gigs on each to be recognized! I get only a flat 8gig each!
I'm running Debian 2.0 Hamm, with Kernel 2.2.0-pre6 with a PPRO single processor board, made in 1995, with the latest BIOS upgrade my vendor has available, circa. Feb., 1997. (bought the thing in '97) Cybermax: www.cybmax.com was the vendor.
Anyhow, the darned IBM drives only show up under Linux as 8gig. To be precise here is output of "df": (I included the full output just in case the added data might be useful. Yep, I've got as many drives as IDE can handle)
# df
Filesystem         1024-blocks    Used   Available  Capacity  Mounted on
/dev/hda5               967677   880562      77116       92%  /
/dev/hda1              1028116  1017468      10648       99%  /mnt/c
/dev/hdb1              8609416    64304    8545112        1%  /mnt/bigboy1
/dev/hdd1              8615982    64304    8551678        1%  /mnt/bigboy2
/dev/sda4                98078    97394        684       99%  /mnt/zip
/dev/hdc                108240   108240          0      100%  /mnt/cdrom
Not quite! You could have /dev/hdd --- for a total of four IDE drives on two channels. I've heard of people running more than that --- but I think that's just silly.
And according to "bc" 8545112 bytes / 1024 bytes per meg / 1024 megs per gig = 8 gigs
The c/h/s numbers printed on both drives: chs: 16383/16/63 lba: 19,807,200
Hmm. Those don't add up. But I'm not surprised.
I wish I knew how to calculate total space in megs using C/H/S numbers!
Sectors are 512 bytes. You multiply cylinders (C), heads (H), and sectors per track (S) to get the total number of sectors. Think of a track as one head on one cylinder. That is to say, it is one concentric ring on one side of one platter.
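For example, pushing the numbers printed on your drive labels through 'bc':

echo '16383 * 16 * 63' | bc          # 16514064 sectors by CHS
echo '16383 * 16 * 63 * 512' | bc    # 8455200768 bytes --- about 8.4 "Gb"
echo '19807200 * 512' | bc           # 10141286400 bytes --- the full ~10 Gb (LBA)

... so the CHS figures on the label only describe roughly the first 8.4Gb of the disk, while the LBA figure gives the whole thing.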
That's all really a fiction since all of the high capacity drives in the last decade (everything over about 200Mb) have used "ZBR" (zone bit recording) and consequently don't physically have the same number of sectors per track on the outer "zones" (rings) of the platters as they do on the inner zones.
The drive electronics hide these details from the rest of the hardware so that the BIOS can "pretend" that it really is an even number of sectors on a given number of heads with a given number of tracks. The drives (SCSI and IDE) will "auto translate" into BIOS compatible disk addresses (CHS). (Actually SCSI controllers usually replace the BIOS routines that handle this --- but effectively the drive is still abstracting most of the details away from the controller and the OS).
The BIOS was only set up to handle 10 bits of cylinder (1024 maximum), six bits of sector (63 maximum per track, since sectors are numbered from 1) and eight bits of "head" --- which fits neatly into a 16 bit register and a one byte register. Those were convenient for programming the 8086 based systems that were common about 20 years ago.
(They're pretty silly now).
In any event the famed 8Gb limit is derived from
max cylinders * max sectors * max heads = maximum total sectors
or:
1024 * 63 * 255 = 16,450,560
which we convert to Kilobytes, Megabytes and Gigabytes by:
16,450,560 / 2 = 8,225,280 (maximum total K) / 1000 = 8,225 (maximum total Mb) / 1000 = 8.2 (maximum total Gb) --- or roughly 8.4 Gb if you count raw bytes (16,450,560 * 512) and divide by 1000 three times.
... note that we don't use 1024 to compute Mb and Gb. This is common practice among drive manufacturers (and unheard of for memory chips). That has been a matter of some controversy as those extra 24 K per Mb start to add up when you're doing them by the thousand.
I won't pretend to be authoritative on that subject. Suffice it to say that given the original constraints of the BIOS addressing system the maximum addressable space (in 512 byte sectors) is between 8 and 8.4 Gb (depending on how you calculate your Gigabytes).
Over the years there have been various other limitations with parts of that. This trick of lying about the number of "heads" and claiming that there were 255 heads was the earliest way to overcome the "1024 cylinder problem" --- which had led to the early "540Mb" limit on IDE drives. Various different ways of accomplishing this were labelled EIDE and ATA-2. We now have ATA-3 and UltraDMA.
fdisk reports these numbers for each of the disks:
/dev/hdb:
=====================================================================
Disk /dev/hdb: 255 heads, 63 sectors, 1232 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot   Begin    Start      End    Blocks   Id  System
/dev/hdb1            1        1     1232  9896008+   83  Linux native
=====================================================================
/dev/hdd:
=====================================================================
Disk /dev/hdd: 16 heads, 63 sectors, 19650 cylinders
Units = cylinders of 1008 * 512 bytes

   Device Boot   Begin    Start      End    Blocks   Id  System
/dev/hdd1            1        1    19650  9903568+   83  Linux native
=====================================================================
Strange, I know: different numbers of cylinders and heads are reported for the two drives even though they are identical models: IBM #DTTA-351010.
The drive's electronics will take all of the parts of any address (CHS) that are presented to it and multiply them all together to get a "linear block address" (LBA). So it really doesn't matter what your CMOS says.
However, you probably have to add lilo.conf directives to pass the drive's true "geometry" to the kernel (so it will ignore the CMOS values).
Here is my /etc/lilo.conf in case it might help:
=========================================================================
boot = /dev/hda          # Device containing boot sector
default = 2.2.0-pre6     # Default image to load
prompt                   # Forces boot prompt
timeout = 50             # Wait <val>/10 sec. after prompt then boot def

image = /boot/vmlinuz-2.0.33
    label = 2.0.33
    root = /dev/hda5
    read-only
    vga = 8
    append = "mem=143M"

image = /boot/vmlinuz-2.0.36
    label = 2.0.36
    root = /dev/hda5
    read-only
    vga = 8
    append = "mem=143M"

image = /boot/vmlinuz-2.2.0
    label = 2.2.0-pre6
    root = /dev/hda5
    read-only
    vga = 8

other = /dev/hda1
    label = win95
    table = /dev/hda
==========================================================================
First try adding the "linear" directive to your lilo.conf "Global" section.
See if that helps.
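For reference, "linear" is just a bare keyword at the top of the file; grafted onto the lilo.conf shown above, the global section would start out something like this (only the first few lines are repeated):

   boot = /dev/hda          # Device containing boot sector
   linear                   # pass linear (LBA-style) addresses instead of C/H/S
   default = 2.2.0-pre6
   prompt
   timeout = 50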
I have each drive in LBA mode in the BIOS with the autodetected settings. CHS autodetected match the numbers printed on the drive, but the BIOS only sees 8 gig I believe.
I just don't know what the deal is.
There is some ruckus on "Ask Slashdot" about this same thing --- how to overcome the 8gig barrier with Linux --- but I'm at a loss after trying so many things.
http://slashdot.org/askslashdot/98/12/22/1143236.shtml
Perhaps you can help investigate this further, and finally put this problem to rest once and for all in the annals of Linux Gazette!
If there is any other info you may need about my system, please don't hesitate to ask...
And if I find a "Correct"[tm] solution, would you like me to post it to you for publication in LG? As it may be beneficial to many people. I will also post it to the maintainer of the Large Disk HOWTO (http://www.linux-howto.com/LDP/HOWTO/mini/Large-Disk.html) as well, for inclusion... if I actually get at a solution!
Actually, Andries Brouwer, maintainer/author of the LargeDisk mini-HOWTO already has a small section on the 8Gb Linux IDE limit at:
http://metalab.unc.edu/LDP/HOWTO/mini/Large-Disk-7.html
... this could probably use a bit of elaboration.
Basically it suggests that recent kernels (2.0.35+ and 2.1.90+) should automatically handle the large drives --- but that they do a sanity check when the reported LBA capacity exceeds the C*H*S capacity by more than a certain amount. Presumably this sanity check is still biting you --- so it may be that you need to apply his suggested patch. (That replaces the sanity check with a stub that always returns the "O.K." value).
I suspect that adding the "linear" directive to your lilo.conf (and running /sbin/lilo to rebuild the maps from it --- of course) will solve the problem. If that doesn't work, try adding appropriate "disk=" parameters to the lilo.conf. Then try this kernel patch.
There is also a white paper on the so called 8.4 gig limit from IBM, in case that might also help give you clues... as I'm only stumped:
http://www.storage.ibm.com/hardsoft/diskdrdl/library/8.4gb.htm
It seems like you did a bit of leg work looking for the answer (so you get an A+ for effort). However, you probably should skim over the whole LargeDisk mini-HOWTO (even the boring parts).
Andries does mention the "linear" option in section 6. It's also listed in the lilo.conf man page (big surprise). Personally I think he might want to provide a bit more meat, even if it only re-iterates or repeats what he said earlier. Many people (including me) will just skip to the section labelled "8Gb IDE Limit." Some will not understand that they should be trying things from other sections of the same HOWTO.
Sincerely,
R. Brock Lynn
Debian 2.0
From R. Brock Lynn on Mon, 18 Jan 1999
Jim Dennis wrote:
> # df
> Filesystem         1024-blocks  Used     Available  Capacity  Mounted on
> /dev/hda5               967677   880562      77116       92%  /
> /dev/hda1              1028116  1017468      10648       99%  /mnt/c
> /dev/hdb1              8609416    64304    8545112        1%  /mnt/bigboy1
> /dev/hdd1              8615982    64304    8551678        1%  /mnt/bigboy2
> /dev/sda4                98078    97394        684       99%  /mnt/zip
> /dev/hdc                108240   108240          0      100%  /mnt/cdrom
Not quite! You could have /dev/hdd --- for a total of four IDE drives on two channels. I've heard of people running more than that --- but I think that's just silly.
Just out of mad curiosity, I wonder if you overlooked the hdd, or whether I'm overlooking the possibility of one more drive. (I also have a new IDE CDR I'd like to put in, but according to what I know, I'd have to take something else out. I think...)
I don't see hdc on this listing --- so I presume you have some other OS on it. I was thinking of 'fdisk -l' output when I was looking at this.
Hmm, I've got: hda (HD), hdb (HD), hdc (HD), hdd (CD) I think it's maxed out, but maybe you have a few tricks up your sleeve?
No. I was just too tired to be trying to write LG/TAG stuff when I read your message and tossed off my first answer.
>The c/h/s numbers printed on both drives:
>chs: 16383/16/63
>lba: 19,807,200
Hmm. Those don't add up. But I'm not surprised.
Yes, I found one solution that seems to have worked to give me the maximum space on the drives!
I have to give credit to Jason Gunthorpe <jgg@debian.org> of the Debian Project for this solution! (And also to several other Debian and non-Debian people on the Open Projects IRC network.)
(I frequently --- or rather much more than frequently --- "hang out" on the #debian and #linpeople channels of the irc.openprojects.net IRC server network, where quite a few Debian developers and package maintainers also "hang out". My handle is "bytor"; Jason's is "Culus". The main reason I switched to Debian from Red Hat was the level of support I can get just by being in the channel and asking questions from time to time. And I also help out newbies as well.)
[Actually the system I'm using now is one that I converted in place from Red Hat 5.0 (upgraded from 4.2) to Debian 2.0. I wrote up a HOWTO and a tool, a short perl script, to help convert your passwd/group/shadow files from one system to the other (and all files on the system to reflect the new uid's/gid's). You can have a gander, if curious, at:
http://www.geocities.com/ResearchTriangle/3328/rh5todeb-howto.txt and
http://www.geocities.com/ResearchTriangle/3328/conversion-tools.tar.gz
Please feel free to include this in any way in the Answer Guy or anywhere on Linux Gazette. I will one day write it up properly in SGML, and submit it to the LDP... just not enough time recently. Maybe I should write a short article for LG? (and then RH would never consider me for a job ever again!)
This thread will probably get in there somehow.
I'm not sure we need another HOWTO for this issue --- although you might submit a set of patches and suggestions to the LargeDisk mini-HOWTO (and I think we might then upgrade it from a "mini-HOWTO" to a "full" HOWTO --- though that's a matter for Andries, Greg Hankins and whoever else is managing LDP HOWTOs these days).
I hope this doesn't put me in bad standing with the Red Hat guys! I think Red Hat is great! But I really wanted to try Debian and didn't have the resources to start fresh! It's working great! I'm about to do an online "apt-get dist-upgrade" to slink soon using this very system, the rh-->deb conversion guinea pig. ]
Nobody should apologize for which Linux distribution they are running.
Oh! You're saying you might release a package to help Red Hat users convert to Debian, and a HOWTO on that.
Anyhow, here's one more trick to put up your sleeve: (or what worked for me to make Linux see all of my big harddrives.)
The BIOS/CMOS is messed up anyway. At least mine is. It's several years old now. It can't handle drives over 8gig (calculated with 1024^n). It autodetects the "correct" numbers that are printed on the drive. But the numbers printed on the drive are actually bogus!
Like Andries and I have said 8Gb is the maximum that can be expressed in CHS format. However, much larger capacities can be expressed in LBA ("linear") mode.
chs: 16383/16/63 (incorrect number of cylinders to match the heads and sectors per track)
lba: 19,807,200 (this, I believe, is the correct total number of sectors though.)
Yes! You're getting it!
LBA stands for "linear block addressing" --- which needs to be supported by your drive and your OS for it to work. (I suspect that you also need at least an EIDE controller).
Let's see what I've learned!
Total Bytes = [Sectors per track (S)] * [Heads (H)] * [Cylinders (C)] * [Bytes per sector (512)]

and

Total Bytes = [Total Sectors ("lba" on my drive)] * [Bytes per sector (512)]
These are good formulas to know... perhaps Andries can add this in an "appendix" to his HOWTO!
I think he walks through these calculations a couple of times already. He doesn't seem to show them in "formula" format.
Anyhow I can now calculate what the proper number of cylinders should be based on those formulas. (set both expressions for total bytes equal, and solve for Cylinders... yep I'm a math egghead.)
You don't care what the cylinders/heads and sectors are. You want to use "linear."
                [Total Sectors ("lba" on my drive)] * [Bytes per sector (512)]
Cylinders(C) = -----------------------------------------------------------------
                [Sectors per track (S)] * [Heads (H)] * [Bytes per sector (512)]

                [Total Sectors ("lba" on my drive)]
Cylinders(C) = -------------------------------------
                [Sectors per track (S)] * [Heads (H)]
for me this is: C = 19,807,200 / (16 * 63 ) = 19650
(And that is exactly what Linux sees at boot up, and what fdisk and cfdisk see ... after the fix Jason Gunthorpe suggested was done)
And if I calculate Gigs, from either formula above, I get:
Total Bytes = [Total Sectors ("lba" on my drive)] * [Bytes per sector (512)]
            = 19,807,200 * 512
            = 10,141,286,400 bytes
            = 10,141,286,400 / (1024 * 1024 * 1024 bytes/gig)
            = 9.44 Gigabytes
            = 9671.48 Megabytes
At boot Linux now sees: CHS=19650,16,63 9671MB and cfdisk sees CHS=19650,16,63 9671.49 MB (right on the money!)
(I think fdisk will see CHS=19650,16,63 also, but Jason suggested I use cfdisk instead of fdisk, as fdisk is no longer being maintained by the "upstream provider", as Debian calls them.)
I blind copied Andries on my message to you and he pointed out that I should have ignored the CHS values in the example calculations that I showed.
Your 'fdisk' output already shows the correct values.
Mystery unraveled! :)
But I still haven't said how I fixed my system:
Here's what Jason suggested:
Wipe the partition table:
either"cat /dev/zero > /hdb"
and count ten seconds as it blasts away at the drive... you only need to wipe the first few K
or"dd if=/dev/zero of=/dev/hdb bs=1024 count=1024"
Actually a count of one and a block size of 512 bytes would have been sufficient.
I think that will wipe the first Megabyte of the drive that supposedly destroys the partition table.
The partition table is in the last ~66 bytes of the master boot record (MBR), which is exactly one sector.
That's all you need to blow away.
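So a truly minimal wipe --- just the MBR sector and nothing else --- would be something like the following (substitute your own drive for /dev/hdb and triple-check the of= target before pressing Enter):

   dd if=/dev/zero of=/dev/hdb bs=512 count=1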
Next, if you have a broken BIOS, like mine, completely disable the setup for your large drives... Linux will detect them anyway whether they are listed in the BIOS or not. (At least 2.2.0-pre6 did) I set the "Not installed" flag for both large drives hdb and hdd in the BIOS.
Hmmm. I think you want to look for an LBA, "linear" or "PIO" mode for the CMOS IDE settings.
Then I rebooted and BINGO, Linux reports the above CHS=19650,16,63 9671MB for both drives! (Before, with the BIOS crap enabled, Linux would see CHS=19650,16,63 for one drive, and CHS=1232,255,63 for the other drive. Strange, I know.)
I think the "linear" option would still do the trick. Most systems won't boot off of a drive that the CMOS has listed as "not installed".
And cfdisk worked for both of them and saw CHS=19650,16,63 9671.49 MB for both drives!
I think it should have shown that anyway. (Maybe it needs the "linear" option).
Next I partitioned each with one large partition, hdb1 and hdd1, and then formatted with mke2fs: "mke2fs -i 1024 -m 0 /dev/hdb1"
-i 1024 is the inode density; -m 0 says reserve none for "root only".
Bad idea! You should reserve a small amount to lessen the chances of damage to the filesystem when it gets full.
Try just 1% on these larger drives. You can use 'tune2fs' to change it (-m to express it as a percentage, -r to use blocks). You can also set the "reserved user/group" for that filesystem so that it's not just 'root' that can use the reserved space on a drive.
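For example (the 1% figure is just the suggestion above; the device name and the 'news' user are only illustrations):

   tune2fs -m 1 /dev/hdb1       # reserve 1% of the blocks for emergencies
   tune2fs -u news /dev/hdb1    # optionally let a non-root user into the reserve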
-c says to check for bad blocks, which I will do later once I settle down on a partition table I can live with.
Do it when you first create the partition. Otherwise some important chunk of data may land on a bad sector before you remember to do it with 'fsck'.
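In other words, fold the bad-block check into the original format; something like this (the -m and -i values are just the ones being kicked around in this thread, not recommendations):

   mke2fs -c -m 1 -i 4096 /dev/hdb1    # -c scans for bad blocks while building the filesystem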
Course you know all that... (but I put it in here for documentation... I will write Andries and ask him to add some of this to his HOWTO.)
It turned out that after the format, using the maximum "inode density" of 1024 (I'm kind of fuzzy on this point, but...), I lost a LOT of space to inode overhead. "df" only saw about 8.2gig: 9.44gig - 8.2gig = 1.24gig lost on each disk, for a total of 2.48gig lost!!! ... There was much pulling of hair and gnashing of teeth at that moment... until I was gently told that increasing the "inode density" number (which lowers the density) would help reduce the inode overhead.
Basically each file uses an inode. Any individual file can use a large number of data blocks. The total number of inodes and data blocks is set when the filesystem is created. Additional inodes (extents?) are also allocated to track indirect blocks (that is, blocks of data that aren't listed in the first inode --- but are listed in one of the blocks that specially links to them).
If you set the ratio wrong you can run out of inodes when plenty of disk space is available. The filesystem will still appear to be "full" in that you won't be able to create new files --- though you'd be able to append some data to some existing ones until you needed more of these "extents" (indirect blocks).
You can use 'df -i' to measure the available number of inodes rather than the number and percentage of datablocks.
Basically you should only reduce the inode density if you know that most of the files will be large --- that you won't have a lot of small files. Even then reducing it can be a bad idea. It is far more common to increase the inode density to handle lots of smaller files.
Think about it. Every file uses at least one inode. Multiple hard links don't use additional inodes; they are additional references to existing inodes. All file names (directory entries) are links to inodes (except for some symlinks which can be embedded directly into ext2 directory structures). So, if you have small files you run out of inodes faster than when you have large ones.
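A quick way to keep an eye on both resources at once (using the mount point from the 'df' listing earlier):

   df /mnt/bigboy1       # data blocks used and free
   df -i /mnt/bigboy1    # inodes used and free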
I then reformatted with:
mke2fs -i 16384 -m 0
And that time, after mounting the partition, "df -m" reported: 9547MB or 9.32gig, so the loss to inode overhead was reduced. (But of course I risk running out of inodes! So I may reduce the -i number to something in between 1024 and 16384!) But this time the loss was: 9.44gig - 9.32gig = 0.12gig MUCH better!
I think that you're cutting it a bit thin. But let us all know how it works out as the drive gets some use.
I also have to thank DJ Delorie <dj@delorie.com> (author of the DJGPP port of gcc to DOS, and the compiler of choice for DOS Quake) for his kind replies to my email for help as well. He had posted on the Ask Slashdot thread about large hard drive problems.
He wrote in with the following:
-------------------------------------------------------------------
    c     h    s          * 512 = total bytes
16383    16   63          * 512 = 8,455,200,768
For 10.1g, c would have to be about 19650. The LBA number is the number of sectors on the disk, so 19,807,200 / (16*63) = 19650, which is what you need to tell fdisk.
Disk /dev/hdb: 255 heads, 63 sectors,  1232 cylinders
Disk /dev/hdd:  16 heads, 63 sectors, 19650 cylinders

255 * 63 *  1232 * 512 = 10,133,544,960
 16 * 63 * 19650 * 512 = 10,141,286,400
Anyhow, the darned IBM drives, after formatting only show about 8.2gig. To be precise, here is output of "df": (I included the full output just in case the
Don't use df. The capacity it reports is less than the size of the partition due to the overhead of the ext2 file system (inodes, free block maps, etc). For example, my 2,096,451 block boot partition shows 2,028,098 blocks in df.
Yeah. It would be nice if the man page for 'df' not only warned you about the overhead but gave you an idea about the typical percentages to expect.
Heck! It would be even nicer if the 'df' command itself offered an option to print the percentage of overhead in inodes, badblocks, reserved space, and any other categories that might exist.
[regarding me being pissed at 10.1gig actually being 9.44gig:]
That makes me MAD! These guys are the cream of the crop... they make the hardware, they should know and use the proper "1024" rather than the 1000 multiplier! Ooh, that strikes a nerve! Anyhow...
Seagate always uses the 1000^n values, so you get what you expect. Most manufacturers tell you which measure they use.
But later I found out that -i 1024 was not the "cluster size" but rather the inode density, and increasing it to, say, 10240 would help cut down on the overhead of all the inodes and give me more space, according to Jason. Haven't tried, but will soon. (but I fear running out of inodes... will have to experiment)
"inode density" is tech speak for "average file size". If you know how big the average file will be, you can make it so that you run out of space and inodes at about the same time.
That's a great simplification. It's absolutely true and doesn't explain the mechanism at all.
Yes, I plan to make a 10 to 20 meg /boot partition just for kernels at the front of the drive... I hope 20 meg is small enough to fit under the 1024th cylinder!
Your kernel is only 1Mb. One cylinder (~8Mb on most big drives) should be plenty.
Heh, perhaps I can sue IBM or the vendor in a local court in my hometown over the difference between 1024 and 1000? And show that 1000 is not the proper multiplier in the world of computers? If nothing else, just to prove a point that consumers don't like to be lied to!
Many catalogs explicitly state "1Gb=1000Mb" somewhere, to tell you which measure they use. Both are equally likely.
Which helped!
>I wish I knew how to calculate total space in megs using C/H/S numbers!
Sectors are 512 bytes. You multiply cylinders (C), heads (H), and sectors per track (S) to get the total number of sectors. Think of a track as one head on one cylinder --- that is to say, one concentric ring on one side of one platter.
That's all really a fiction since all of the high capacity drives in the last decade (everything over about 200Mb) have used "ZBR" (zone bit recording) and consequently don't physically have the same number of sectors per track on the outer "zones" (rings) of the platters as they do on the inner zones.
The drive electronics hide these details from the rest of the hardware so that the BIOS can "pretend" that it really is an even number of sectors on a given number of heads with a given number of tracks. The drives (SCSI and IDE) will "auto translate" into BIOS compatible disk addresses (CHS). (Actually SCSI controllers usually replace the BIOS routines that handle this --- but effectively the drive is still abstracting most of the details away from the controller and the OS).
The BIOS was only set to handle 10 bits of cylinder (1024 maximum), six bits of sector (per track), and eight bits of "head" --- which fit neatly into a 16-bit register and a one-byte register. Those were convenient for programming the 8086-based systems that were common about 20 years ago.
(They're pretty silly now).
In any event the famed 8Gb limit is derived from
"
max cylinders max sectors max heads = maximum total sectors
or:
1024 64 255 = 16777216
"
which we convert to Kilobytes, Megabytes and Gigabytes by:
"
16777216 / 2 = 8388608 (maximum total K)
/ 1000 = 8388 (maximum total Mb) / 1000 = 8.4 (maximum total Gb) "
... note that we don't use 1024 to compute Mb and Gb. This is common practice among drive manufacturers (and unheard of for memory chips). That has been a matter of some controversy, as those extra 24 K per Mb start to add up when you're doing them by the thousand.
I won't pretend to be authoritative on that subject. Suffice it to say that given the original constraints of the BIOS addressing system the maximum addressable space (in 512 byte sectors) is between 8 and 8.4 Gb (depending on how you calculate your Gigabytes).
Over the years there have been various other limitations with parts of that. This trick of lying about the number of "heads" and claiming that there were 255 heads was the earliest way to overcome the "1024 cylinder problem" --- which had led to the early "540Mb" limit on IDE drives. Various different ways of accomplishing this were labelled EIDE and ATA-2. We now have ATA-3 and UltraDMA.
Thanks a TON for the above information! Very helpful stuff!
The drive's electronics will take all of the parts of any address (CHS) that are presented to it and multiply them all together to get a "linear block address" (LBA). So it really doesn't matter what your CMOS says.
However, you probably have to add lilo.conf directives to pass the drive's true "geometry" to the kernel (so it will ignore the CMOS values).
I was pondering doing that, instead of twiddling with disabling the drives in the BIOS. As I might --- heaven help me --- want to put NT, *BSD, Solaris x86, or BeOS on the drives as well, and they might require a BIOS entry!
I suppose now that I have the correct "bogus" geometries, I can
add that in lilo as:
append = "hdb=19650,16,63 hdd=19650,16,63"
And then maybe reenable the BIOS entries? (Jason suggested that once I got the drives partitioned and formatted correctly I might be able to reenable the BIOS settings so that DOS or other OS's would be able to see them... not sure on that though. But he warned me that cfdisk or fdisk might not partition the drive so that the partition boundaries land at places where DOS, NT, or other OS's might expect them to.)
Another thing that Jason suggested (something he says he's done before) is to take the drive to someone with a Pentium II motherboard (assuming they have a working BIOS) and partition with DOS fdisk, so you know the partition table is acceptable to DOS-style OS's (in case you ever have a need to fool with such things). Then take the drive back to your broken-BIOS computer and change the partition types to Linux and Linux Swap, without changing the boundaries (dunno if you have to disable the BIOS entries or not first), and then it should *work*!
That's good advice. Think about doing a BIOS upgrade for yourself, too.
>Perhaps you can help investigate this further, and finally put
>this problem to rest once and for all in the annals of Linux
>Gazette!
>And if I find a "Correct"[tm] solution, would you like me to post
>it to you for publication in LG? As it may be beneficial to many
>people. I will also post it to the maintainer of the Large Disk
>HOWTO (> >http://www.linux-howto.com/LDP/HOWTO/mini/Large-Disk.html)
>as well, for inclusion... if I actually get at a solution!
Actually, Andries Brouwer, maintainer/author of the LargeDisk mini-HOWTO already has a small section on the 8Gb Linux IDE limit at:
http://metalab.unc.edu/LDP/HOWTO/mini/Large-Disk-7.html
... this could probably use a bit of elaboration.
Basically it suggests that recent kernels (2.0.35+ and 2.1.90+) should automatically handle the large drives --- but that they do a sanity check when the reported LBA capacity exceeds the C*H*S capacity by more than a certain amount. Presumably this sanity check is still biting you --- so it may be that you need to apply his suggested patch. (That replaces the sanity check with a stub that always returns the "O.K." value).
Ah, I will look into that. If I reenable the BIOS entries and Linux starts to see funny values again, I'll try it.
I haven't had a working Windows partition on my system for over a year now. I love Linux, but since I have all the space now with the new drives I decided I might want to try NT... the main interest being to experiment with Cygwin to get a Unix-like layer working for NT (in case I ever have a job with NT servers, I'll have experience in Unix-ifying them).
I suspect that adding the "linear" directive to your lilo.conf (and running /sbin/lilo to rebuild the maps from it --- of course) will solve the problem. If that doesn't work, try adding appropriate "disk=" parameters to the lilo.conf. Then try this kernel patch.
Hmm, I'm not familiar with the reasoning behind the "linear" option. I seem to recall all SCSI disks need it? May try it also and see what happens. Is "linear" a global option to lilo that affects all disks in the system, or a per-disk option? I think it is global, but I'm not sure. And if global, would it adversely affect the smaller drives that have, up till now, worked well w/o that option? I'll have to investigate this.
It's listed in the "Global Options" section of the man page. But I'm not sure.
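The per-disk alternative is a "disk=" section, which the lilo.conf man page also documents; a sketch using the geometry from this thread would look roughly like the block below (the bios=0x81 line is an assumption about which BIOS drive number /dev/hdb gets, so adjust or omit it):

   disk = /dev/hdb
       bios = 0x81          # assumed: second BIOS drive
       cylinders = 19650
       heads = 16
       sectors = 63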
>There is also a white paper on the so called 8.4 gig limit from
>IBM, in case that might also help give you clues... as I'm only
>stumped:
>http://www.storage.ibm.com/hardsoft/diskdrdl/library/8.4gb.htm
It seems like you did a bit of leg work looking for the answer (so you get an A+ for effort). However, you probably should skim over the whole LargeDisk mini-HOWTO (even the boring parts).
Well, thanks for the commendation.
I've just got to know the real answer! I'll go to almost any length to get at "what's really going on".
Andries does mention the "linear" option in section 6. It's also listed in the lilo.conf man page (big surprise). Personally I think he might want to provide a bit more meat, even if it only re-iterates or repeats what he said earlier. Many people (including me) will just skip to the section labelled "8Gb IDE Limit." Some will not understand that they should be trying things from other sections of the same HOWTO.
Yes, I have to admit I didn't read the whole thing; I skimmed a bit and focused on that short section. I'll give it another look, this time reading it carefully, and if I see that any of the things above are missing, I'll prepare an email and send it off to him for inclusion in the next version.
Also, one other thing that I can do is try the Ontrack Disk Manager software for the IBM drives. It's similar to EZDrive, and is supported by Linux... only someone told me it wasn't supported by FreeBSD... and I want to experiment with it. As I was told, this Ontrack Disk Manager installs to the boot drive, even if it's not the drive that needs it, and gets loaded at boot time, before even the lilo code in the MBR gets called. It supposedly replaces the BIOS disk routines. This may be the better solution for Linux and NT, but not if I want to try one of the BSDs. I will have to look into this more as well.
I remember back when I needed EZDrive with my 486 to recognize the full 540meg drive I had back then. And was surprised when Linux detected and dealt with EZDrive properly!
I was surprised when they added the support for OnTrack EZDrive and a few others, too.
I still won't go near them. But it's nice to know that we can.
Thanks for your reply! Will you write up an "Answer Guy" section detailing this question / problem in the next LG, or is it too involved?
It's certainly not my longest or most complicated thread. However, writing it up in a more organized fashion, as an LG article and as a set of suggested enhancements to the mini-HOWTO...
[ Once Jim's written it, it stays in. The only messages or threads I ever toss out completely are some with no Linux in them. But I do sometimes defer confusing threads until the next issue, so I can spend the first week of a month polishing them so they don't make me dizzy. This one's pretty close, but I think it'll do alright. -- Heather ]
R. Brock Lynn
From pat on Sun, 17 Jan 1999
[ Hmm. Jim scribbled a note here advising me to make sure to link the "Linux Tips and Tricks" site. I wonder where that could be? So, I guess I can't say there's any "one true", definitive Tips and Tricks site. I wonder what he meant?? -- Heather ]
- I tried the Google! Linux search (http://www.google.com/linux) but got hundreds of hits on The Gazette's 2cent Tips from LG mirrors.
- LinuxHQ (www.linuxhq.com) would probably know, but their ht://Dig database broke when the webmaster upgraded; it's supposed to be fixed soon.
- Doesn't sound like he means the Linux KnowledgeBase (http://linuxkb.cheek.com/)
- He couldn't mean http://howto.linuxberg.com/, I don't think he even knows the TUCOWS folks started up Linuxberg.
- Even the Mini-HOWTO's are a bit large to be "Tips and Tricks" in my book (but you can check the LDP anyway at http://metalab.unc.edu/mdw/ldp.html to see what you think).
- I tried Linux Links by Goob (http://www.linuxlinks.com/) which has lots of great stuff, but the only hit on these two keywords found a page for Chinese Linux users.
- LinuxPowered.Com looks pretty handy for Linux newbies, but doesn't mention either word, since it has clearer categories.
Thanks for the tip
And even more so as I read your column in LG each month --- you're great. I've learned many things from you.
I've also changed the url of ipchains as you pointed out.
That's good. Stale links are a bear.
One thing you could do is point to the Linux-TIPS HOWTO (or mirror it at your site --- and link to your own mirror), and provide a set of links to the Linux Gazette "2 Cent Tips" columns (and to mine if you like). Since LG is under the LDP license you can mirror the whole set if you like.
[ OK, I guess he meant the specific HOWTO, http://www.linuxhq.com/HOWTO/Tips-HOWTO.html -- Heather ]
This will help bootstrap your site and help users get a lot more tips.
It would be really cool if you or some volunteers went through the existing TIPS HOWTO and 2 Cent Tips and Answer Guy back issues and indexed (and or quoted) them into your organizational hierarchy. Granted, it's rather boring scutwork (read, cut, paste, wrap in HTML) rather than creative research and composition --- but your readership will get a huge bang for their buck.
The problem with my writing (vis a vis the Answer Guy) is that it follows no organization and is not sanely indexed (not counting the search engines). So I've written five or six hundred pages of useful stuff that is inaccessible to many of the key people that need it --- since they can't wade through all of the back issues to find it.
My wife has toyed with the idea of doing a "best of" cut and setting up a set of web pages devoted to it. If I was making enough money (or got some funding) I'd pay someone to do it.
Hmmm. I should provide a courtesy link from my column to your site. I'm copying this to my lgaz and star (editor) addresses to remind me.
[ Aha! Together with the referer below, I deduce (and my browser confirms) that he means the site listed below. If you want to submit a tip to Pat, use the link below. My tip for Pat ... add yourself to the Linux-related search engines so people can find you. This column ought to help. -- Heather ]
- LTT: Linux Tips and Tricks
- http://www.patoche.org/LTT/
Thanks Patrick
On 06-Jan-99, took time to write :
On Wed Jan 6 04:50:50 1999, from machine 209.157.85.20, the following form entry was submitted:
Referer = http://www.patoche.org/LTT/submit.html?from=33
subject : rpm -Vp Verifies .rpm file vs. Installation sections : Security
From Martin Skjöldebrand on Sun, 17 Jan 1999
Hi,
I'm trying to install Debian from floppies on my spare lap-top.
It's an old machine, a Compaq Contura 486/25 with 4 MB RAM and an 80 MB HDD.
The installation goes well (mostly - it complains that the swap space cannot be initialized but it still is used, swapon during startup later on goes well). But after rebooting I get various memory errors.
The latest being 'bash fork: Cannot allocate memory' when trying to do anything on the machine.
This sounds more like there is a disk error (bad block or some such) that's somewhere in the area where you're trying to create your swap partition.
That would explain both the initialization failure (which I presume is an error message from the installation script's 'mkswap' routine) and the bash errors.
I've read and re-read the floppy install on low-memory systems. I've expanded the swap space to about 20 MB (should be enough) but it still complains about the memory problem.
If the error is near the beginning of the swap file/partition --- then you'll keep getting it no matter how much disk space you add to the partition.
Try invoking the mkswap command (which should be somewhere in your startup files) with the -c option (to check for bad blocks).
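Assuming the swap partition is /dev/hda2 (substitute whatever the Debian installer actually created), the check looks something like:

   swapoff /dev/hda2      # if it's currently in use
   mkswap -c /dev/hda2    # re-make the swap area, scanning for bad blocks
   swapon /dev/hda2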
Any ideas? Is it possible to run Debian on a 4 MB RAM machine?
I don't know. That's cutting it pretty thin. I certainly wouldn't use 'bash' on a 4Mb system --- 'bash' is hardly a lightweight shell. Try 'ash' --- which is a simpler and smaller shell that's designed for use on rescue floppies, etc.
You'll certainly want to compile a custom trimmed kernel (on another system) for use in such a constrained setting. I wouldn't think that the Conturas were so old that you can't find additional memory for them. Bumping that up to 8 or 16 Mb will make a huge difference in what you can do with that laptop. Otherwise I'd really just use it with a few DOS programs (there are DOS versions of many Unix utilities). The biggest disadvantage of DOS is that you don't get any TCP/IP networking (or, when you load up a TCP/IP stack --- and a few drivers for mice, CD drives, etc. --- it eats up so much "conventional" MS-DOS memory that you can't run anything that you care about). If you really prefer a Unix-like environment you might find a copy of Minix --- which can run on PC/XTs and can certainly fit on a Compaq.
(Of course, a Linux kernel with TCP/IP networking and all other extraneous bits removed can boot in a little over 1Mb. This wouldn't be any normal distribution --- you'd want to use one of the micro distributions that's tailored specifically for low memory machines. For example on the "major-linux-archive-formerly-known-as-sunsite":
(Now known as metalab.unc.edu): we have
http://metalab.unc.edu/pub/Linux/distributions
... which lists:
smalllinux-0.4.0.src.tar.gz
... as one of its holdings. That's a 1.2.11 kernel with patches to support ELF binaries. There was also a 1.09 based kernel with similar patches that was called "Linux-Lite" or something like that. These are likely to be better suited to use on a laptop with less than 8Mb.
In a lot of cases it depends on what you're planning on running. For example, for some sorts of routers you'd want to use a newer kernel --- since it only has to run the kernel, the shell script to set up your routes and packet filtering rules, and maybe a copy of syslogd (if you want to remotely log some sorts of traffic). For that you'd want a more recent kernel with a better TCP/IP stack and preferably with the more powerful IPChains packet filtering features (standard for the upcoming 2.2 kernel, available as patches to 2.0).
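As for the custom trimmed kernel mentioned above, the build itself would happen on a bigger machine; for a 2.0.x kernel the sequence is roughly the classic one below (answer 'n' to every feature you don't need):

   cd /usr/src/linux
   make config                            # or menuconfig; turn off modules, sound, SCSI, etc.
   make dep && make clean && make zImage
   # then copy arch/i386/boot/zImage onto the laptop's boot floppy or into its /boot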
Anyway, good luck. Check out for-sale news groups to see if you can find a good deal on used Contura memory modules.
M.
From Dan Bell on Sat, 16 Jan 1999
I have been a Windows user forever, and I got tired of the constant crashing so I have just installed Red Hat 5.2 on my laptop. I travel the world in the telecommunications business. I haven't had one crash since installing Linux. My problem is low resolution on the LCD screen when running X Windows. Under Windows my screen has an 800 x 600 resolution. The best resolution that I can get when installing XFree is 640 x 480. This is using the probing feature of the installation. I know there must be a way to sharpen the characters and icons. However with my limited knowledge I can't seem to find the answer. Please help or direct me to someone who can help solve this simple problem.
Dan Bell
Personally I find X to be unusable until you can get up to about 1024x768. However, I rarely use any GUI so when I need one I need it to be pretty good.
(When I use Netscape I change the "icon" bar to text only, and tweak as many of the settings as possible to "unclutter" the window frames. Then I size it to almost completely fill my current screen --- with the virtual screen panner peeking out above it. That's set to 3x2 --- so I can get to any of the three "top row" screens with just a click and to any of the others with two --- right-click on the app title bar to "bury it" and then the whole panner is available).
However, back to your question.
You don't give any details about your laptop. So, I can't give any specific suggestions. However I can give some general ones.
First look in the Laptop Support Pages:
http://www.cs.utexas.edu/users/kharker/linux-laptop
... This lists a few hundred models of laptops and provides details about the installation and use of Linux on them. It's an all-volunteer effort (like most of the best projects in Linux) so the reporting can be a bit uneven.
So, look up your model of laptop in that database --- or the closest that you can find. Also read through some of the entries for some other laptops (more or less at random) so you can get some idea of general problems and common solutions.
One of the common problems with many laptops is the use of the Neomagic chipset. This is a proprietary chipset for which programming specifications are not openly available. Luckily there is a free binary-only XFree86 "server" for it.
Since you are new to Linux, and presumably Unix and X as well, I'll digress for a moment to clarify a point of terminology that causes great confusion:
The X Window System is a communications protocol. You have a "display server" (consisting of one or more "screens" a mouse and/or sensor tablet and a keyboard) and a set of clients (various programs that request operations, such as the drawing of windows on the screen, or the reporting of mouse and keyboard events). The clients can be run locally (as most of us do with most of our Linux boxes most of the time --- where the client program is running on the same system as the server) or it might be running remotely (communicating over TCP/IP on port 6000 or so). In either event the client and server communicate through the X protocol over some sort of networking channel (unix domain or TCP/IP sockets).
Anyway, the software driver that responds to video requests from the "clients" (Netscape Navigator, xterm, GhostView, etc) is referred to as a "server." Thus we have different servers for different video cards. Technically I think that there would be different servers for different combinations of mouse, keyboard and video cards --- but I think that the XFree86 implementation has been able to consolidate the keyboard and mouse support into a common set of libraries --- so only the video chipset support is sufficiently different between systems to warrant different drivers.
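One concrete illustration of that client/server split, assuming a machine named 'bigbox' runs the programs and your laptop runs the display server (and that xhost/xauth access control has already been opened up, which isn't shown here):

   # on bigbox --- the client side:
   export DISPLAY=laptop:0.0   # first display server on the laptop, first screen
   xterm &                     # the program runs on bigbox; its window appears on the laptop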
While looking at Kenneth Harker's laptop support pages you should also look in the documentation for your laptop (or contact the manufacturer and beat it out of their support staff). You want to know the video chipset (such as the CT65545 from "Chips and Technologies").
There is a whole section of KHarker's pages devoted to general info about XFree86 on laptops (for Linux and FreeBSD users, et al).
Finally, if these free resources fail you --- consider a commercial solution. There are at least two companies that provide commercial X servers for Linux. Since XFree86 is pretty good --- these companies specialize in laptops and proprietary video cards that won't play nice with the freeware programmers. (Naturally, it would be better for the free software and alternative OS communities to refrain from buying such hardware --- but some of us get stuck with what we've got, so...).
So the two sites I'd check are:
- MetroLink Inc. (publishers of Metro-X)
- http://www.metrolink.com
- Xig (formerly X Inside Graphics)
- http://www.xig.com
... and, of course, you can check the latest info on XFree86 by browsing around on its web site at:
http://www.xfree86.org
It's possible that your copy of X can drive your video card just fine even though the autodetection code doesn't do it. Unfortunately X configuration for those cases is still a bit of a black art (more art and magic than science).
From Erfan on Tue, 19 Jan 1999
Hi
Can you help me with my problem?
I have just started with RedHat 5.2 and it's the first time for me to work on any Linux systems. Everything seems to go quite good for the moment but I have one problem.
I have made 3 parts of my hard drive, one for dos, one for Linux and one for Linux swap. I have some files in my dos drive that I would like to access under Linux, but how do I do that???
Log in as 'root' and issue a command like:
mount -t msdos /dev/hda1 /mnt/dos_c
... where /dev/hda1 is the first partition on your first IDE hard drive (replace that with the actual Linux device name for your MS-DOS partition). /mnt/dos_c is an arbitrary directory. Just make one under any convenient name. I actually use just /mnt/c for that.
-t (type) msdos is only one option. There are versions of this that support long filenames. However, your kernel might not be configured to support that and I don't have the time to go into all those details, here.
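If your kernel does have the long-filename (vfat) support compiled in, the variation is simply (same caveat about the device name as above):

   mount -t vfat /dev/hda1 /mnt/dos_c

   # or add a line like this to /etc/fstab so a plain "mount /mnt/dos_c" works:
   /dev/hda1   /mnt/dos_c   vfat   defaults   0 0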
- For more details browse the UMSDOS HOWTO at
- http://metalab.unc.edu/LDP/HOWTO/UMSDOS-HOWTO.html
I have tried to start xdos, but a window comes up and right away it disappears again!
xdos is an interface to DOSEMU --- a system for running DOS under Linux (technically it is not a DOS "emulator" since it runs a real copy of DOS --- but it is more of a system/BIOS emulator).
dosemu requires some configuration. (You have to essentially install a copy of DOS (MS-DOS or DR-DOS or FreeDOS or whatever) into an "hdimage" file --- which is a small, emulated boot disk.) Read the DOSEMU HOWTO (http://metalab.unc.edu/LDP/HOWTO/DOSEMU-HOWTO.html) for more on that.
My X Window setup works fine and I'm using WindowMaker (the version that comes along with the RedHat 5.2 CD). The computer is a P2 233 with 32Mb RAM and a 3200Mb disk (about 1000 Mb for Linux, about 50 Mb for swap and the rest for dos).
If you are going to put the answer on the GAZETTE page please e-mail me and tell me that.
Normally answers to all mail to "linux-questions-only@ssc.com" and any mail to jimd@starshine.org that looks like "Answer Guy" material is published. I normally quote the entire message as I received it --- and I usually leave in all typos.
I consider it to be the cost of sending mail to me for free advice. If you really don't want your message posted, let me know. I'll forward that request to my editors.
On the other hand I also sanitize messages of most identifying information (particularly e-mail addresses). This is to protect my correspondents from spam (and I end up having to manually relay mail from other users to my previous correspondents as a result). I normally leave in a user's signature with their name as it appeared therein --- though I'd be happy to remove just a querent's last name, and corporate affiliation.
[ The script I use to aid my HTML editing tries to get your name from the headers. Sometimes I can tell it's wrong, and use your sig as a guide. If I can't tell what your name is, or any querent requests, I use Anonymous instead. I usually scrub corporate identities, unless they're mentioned elsewhere in the message. Sometimes I leave the fortune cookies in. -- Heather ]
The point of my answering questions via e-mail and republishing them on the web is to make them available to as many people as possible. People who want answers and complete privacy can hire a consultant (or post messages anonymously to the appropriate mailing lists or newsgroups).
Sometimes I also pull messages from newsgroups or mailing lists where I'm answering them anyway. I participate in those when I can (more of an addiction than a hobby really).
Thanks for everything, Erfan from Sweden
From William Smith on Tue, 19 Jan 1999
How do you perform a low level format on your hard disk? My system has a virus, received when downloading from the net, that keeps throwing it into safe mode. I have completed C:\format and re-installed Windows; it ran great for six months and then went back into safe mode. I was able to get it back up and running, but I can't remember how to perform the low level formatting.
You don't actually need a "low-level" format. You can just "zero out" or "wipe" the drive.
Boot up Linux (Tom's Root/Boot at http://www.toms.net/rb) and issue the command:
dd if=/dev/zero of=/dev/hda
... to wipe out the whole first IDE drive, or
dd if=/dev/zero of=/dev/sda
... to wipe out the whole first SCSI drive.
Actually you could just blow away one sector on these drives using
dd if=/dev/zero of=/dev/sda count=1
It wouldn't make sense to do this to other drives (/dev/hdb, /dev/hdc, /dev/sdb, etc) since their boot sectors aren't referenced as code and you can reformat those drives with normal DOS or Linux commands to re-make your filesystems on them. However, you can issue this command for all of your drives if you like. In fact you should be able to do something like:
for i in a b c d; do dd if=/dev/zero of=/dev/hd$i; done
... to wipe all four IDE drives, or
for i in a b c d e f g; do dd if=/dev/zero of=/dev/sd$i; done
... to wipe out all seven disks on a SCSI chain.
Can you assist me... Help!!!!!!!!!!! William
Now. I realize that you didn't ask about Linux, and you might have no idea why I'm responding to your question with a suggestion that involves it.
Before you write back to me to ask those questions --- DON'T.
I answer Linux questions. Microsoft sold you Windows 9x --- you can get tech support from them or you can find a free "Windows Answer Guy." I don't like MS Windows and I don't use it. I will not freely answer questions from strangers that don't relate to the products that I do use and like.
[ Try Winfiles.Com, they have Tips and Howto areas. -- Heather ]
Linux, like other forms of Unix, is basically not susceptible to computer viruses. This is largely a matter of typical usage (they are multi-user systems which protect the system and most user accounts from most activities of individual users. Most Linux and Unix users just don't run as "root" --- and consequently trojan horses and viruses normally cannot utterly cripple a whole system just because the guy at the keyboard ran them).
This is not to say that they are "safe" from trojans --- a trojan can still blow away or corrupt any files owned by the guy that runs them. But it's a lot better, in the long run, than the common case with DOS, Windows, and MacOS. I think it's worth the extra learning curve and the occasional inconvenience (of having to switch to another "virtual console" or window and log in as root).
So, consider getting a copy of Tom's Root/Boot. It's a relatively powerful Linux distribution on a single floppy with enough power and utility to be useful. There are several other Linux distributions that fit on one, two or three floppies, and run from RAM disks.
Consider trying a full blown Linux distribution (like Red Hat http://www.redhat.com, Debian http://www.debian.org, S.u.S.E. http://www.suse.com, Caldera http://www.caldera.com, or any of the others). That will give you a choice. You'll have a basis for comparison, and you can then go back to (or continue to use) Windows, or learn more about the OS that a few million others have adopted.
From Fadel on Fri, 22 Jan 1999
Dear Sir..
How are you?
I'm writing to ask you: how can I remove bad sectors from my HDD?
Please reply me as soon as you can.
yours
Fadel
I'm not sure what you mean by "removing bad sectors."
A "bad sector" is a portion on a hard drive we doesn't appear to reliably record data. That is to say that attempts to record test patterns to this location on the disk and read them back result in errors.
Some bad sectors are manufacturers defects on the surface of the disk (generally minor imperfections in the metal-oxide or other coating which is deposited on the disk platter during its manufacture). Before it is shipped a normal hard drive is thoroughly tested on the manufacturer's test harnesses to "map out" the initial set of bad sectors and to ensure that the number of them fell below a suitable threshold.
Back in the old days (about 5 years ago and more) it was common to see the bad sectors listed on a sticker on the drives housing. That was common with MFM and RLL (ST-506 interface) drives. However it is largely unnecessary with modern SCSI and IDE drives.
Modern hard drives have "extra" sectors on every track. These are automatically "mapped in" to replace bad sectors. This happens initially at the factory and (at least with some of them) automatically in normal use. The drive electronics on these sorts of drives are actually embedded microcomputers running a program to store (typically on a "hidden" diagnostics cylinder) the state of the rest of the drive.
Consequently most modern drives leave the factory with no "apparent" bad sectors (and a few extras per track). So they'd rarely need a bad sectors list. (Also, if they had one, it would be very difficult to use it in mainstream modern operating systems like Windows '98 --- which has no option or way for you to supply a list of bad sectors to their disk formatting utilities).
In the case of Linux it is possible to supply such a list. However it is generally much easier to just run 'badblocks', which will scan specified portions of the disk's surface, testing every sector and returning a list of bad blocks.
Normally you wouldn't run 'badblocks' yourself. As I've mentioned in past issues of my column, you normally supply -c options to the mke2fs and e2fsck commands (named mkfs.ext2 and fsck.ext2 on some systems). These options force these commands to transparently call 'badblocks', passing in the parameters specifying the partitions (disk regions) and reading back the results (the bad blocks). The resulting list of bad blocks is then stored according to the needs of the filesystem in question (ext2 in this case).
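For an existing ext2 filesystem that looks something like this (the device name is only an example, and the filesystem should be unmounted first):

   umount /dev/hda5
   e2fsck -c /dev/hda5    # runs 'badblocks' and records any hits in the bad-block list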
The ext2 filesystem uses a special sort of "hidden file" to which it allocates all of the bad blocks on the filesystem. This ensures that those data blocks (sectors) will never be accessed or used for any other files.
Under MS-DOS we used to manually name files suspected of containing a bad block (those which would cause the whole system to "hang" when we'd attempt to access them) with a name like BADBLOCK.001. Later Peter Norton, Paul Mace and others wrote utilities to help us test for and properly mark bad blocks.
Now, if you mean that you want to return bad blocks to use, I suppose the easiest method would be to make a new filesystem over the one that has the bad blocks. You could run mke2fs without the -c option and let it trip over any bad blocks on its own. If there are blocks that were properly detected as 'bad' before --- it's typically a VERY BAD idea to try to use them to store data later. You can't selectively use the 'bad blocks' for "unimportant" data and you can't guarantee that the controller won't hang up the whole system (or drastically hurt its performance) during attempts to access these. (Sometimes blocks are "marginal" --- data can be stored there and read back with some retries and error correction. All hard drives use ECC --- error correction coding --- and automatically correct most bit errors in normal operation. However, a block is declared 'bad' when it passes certain thresholds, always requiring ECC and often requiring multiple retries. I don't know the exact details of those thresholds --- but they certainly differ among various drive manufacturers).
From D Pettersen on Thu, 28 Jan 1999
Dear Linux Guru:
Recently, due to a win98 (yuck) crash, I had to reformat my hard disk and reinstall both win/98 (yuck) and Red Hat Linux 5.2 (yeah). The reconfigure was going well with Linux until it came time to go on line. I can connect to my ISP as root and open and surf the net with Netscape Communicator. Since we know that a good Linux user does not surf as root, therein lies my problem. I can (as a user) make the ISP connection with usernet, but when I try to open Netscape Communicator I get error messages, usually of the improper-DNS type. I configured the connection as I did before my reinstall, so I can't figure out what I did wrong. Could you be of assistance?
Thankyou:
So if you open an 'xterm' or switch to a console prompt (using [Ctrl][Alt][F2] or the like) and you try (as your normal user) to use 'ping' or 'traceroute' --- does it give any error message?
If you use 'ifconfig' does it show that the (presumably PPP) link is configured? How do you know that your ISP link is actually working? What does your routing table look like? (Issue the command 'route -n' from any root shell prompt and cut/paste or redirect it to a temp file).
Do you have 'lynx' installed? Try running 'lynx' to see if this is a Netscape Communicator specific problem or if it is a network configuration issue.
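A quick sequence to narrow it down, run as your normal user once the PPP link is up (the hostname is only an example):

   ping -c 3 www.ssc.com     # does name resolution and routing work at all?
   cat /etc/resolv.conf      # are your ISP's nameserver lines in here?
   ls -l /etc/resolv.conf    # ... and is the file readable by ordinary users?
   route -n                  # is there a default route over ppp0?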
From Cesar A. K. Grossmann on Thu, 28 Jan 1999
Hi!
Is there a way to sign my Communicator mail and news with PGP on Linux?
TIA
Probably not in the current code. You could get involved in the Mozilla project (http://www.mozilla.org) and help their open source community add the desired PGP and GPG (GNU Privacy Guard) support.
Personally I prefer to use packages that are written specifically as mail and news readers rather than trying to "drive screws with a hammer" using a browser as a mail reader and news client.
Of course I could be wrong. I don't use Communicator's mail and news functions. However, I have to assume that you looked at all of the relevant menus and dialog boxes and didn't see any options in their UI. If that's the case it seems very unlikely that there is some sort of "hidden" interface to some "undocumented" features that will give you PGP support.
Of course your best bet would be to ask this question on the Mozilla developers list. They know much more about Netscape's Navigator (and presumably about Communicator) than I will --- and they are the most likely to add the features (assuming that they really aren't there).
From DrDave on Wed, 27 Jan 1999
Dear answer.guy.jim:
I'm not at all sure this is how one sends questions for the "Answer Guy" column, so if I'm guessing wrong, please let me know how I should do this before piping my message to /dev/rtfm.
Cute. You've guessed correctly on how to post questions. However, you don't normally "pipe" data into "device nodes" and you don't normally store scripts or executables under the /dev/ directory. So I might write a script to autorespond with "RTFM" --- but I'd put it in /bin or (more likely) ~/bin (a.k.a. $HOME/bin). If I had a magic "rtfm" device driver (sounds neat!) I'd redirect or 'cat' the message into it.
Still it's a clever turn of phrase.
Anyway, I've been a Linux user for all of about 72 hours now. The first 24 or so were spent trying to figure out how to recover from some faulty partitioning on my second drive, so we're really only looking at 48.
Do you ever sleep?
So, you can imagine that the "man" command is pretty vital to me...
Well, moments ago, I was running an X11 session and something terribly evil happened which left me unable to properly shutdown my system. When I rebooted, Linux complained about all sorts of problems. Through some miracle (hey, the Pope is in town... coincidence?) I was able to figure out how to manually run fsck as the boot messages suggested. It had to fix a couple of problems in /root, and about 50 zillion in /hdb8 which looked like they were mostly Netscape cache files. Once that was done, I was able to get back into Linux, and now everything seems (so far) to be working fine. Miracles again? Hmmm...
Anyway... that was a bit of a lie. The one thing that isn't working fine is my "man" command. Actually, the command runs just fine, but it can't find any of the appropriate files. In other words, "man ls" returns "No manual entry for ls." I tried locate man | less, thinking that maybe some of the things fsck put in lost and found were actually my man files, but no... those seem to be intact.
OK, you're the Answer Guy, so here's the question:
How exactly does man look up a manual page that you request? Knowing
something about that procedure would help me trace my way to the
problem, methinks.
I don't know exactly what the 'man' command does. You could read the sources to get some idea of that --- or you could run 'man' under the 'strace' and/or 'ltrace' programs (system call and library function trace utilities for programming and debugging). I suppose you could run it under 'gdb' (the GNU interactive debugger), too.
However, I can give you some general ideas (which will be far more productive than looking at the operations of 'man' through a microscope).
Your 'man' page sources (in groff format) are located under /usr/man in "chapter" directories named: man1, man2, etc. These sources must be processed by the 'man' command according to the method of access (printing or viewing).
The 'man' command maintains a set of cached pages that have been processed by the viewer. Technically I think it uses the 'catman' program to do this. Anyway, these are stored under the /var/catman/ hierarchy. One possibility is that you have some corrupt files under /var/catman.
I suppose there are many other possibilities. Your /usr/bin/man binary could be damaged, for example.
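Before re-installing anything, a couple of quick checks can narrow things down. This is just a sketch; the exact cat directory layout is an assumption and varies a bit between distributions:

# Where would 'man' look for the ls page? (if your 'man' supports the -w option)
man -w ls
# Is the groff source for the page actually there? (it may be gzipped)
ls -l /usr/man/man1/ls.1*
# Cached, pre-formatted pages can safely be removed; they are rebuilt on demand
rm -rf /var/catman/cat*/*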
In any event it is probably easiest to simply re-install the 'man' package. You don't specify which Linux distribution you are using --- but I'll guess it might be Red Hat. To re-install the man package under Red Hat Linux --- mount your CD (probably by just issuing the command 'mount /mnt/cdrom'), change into the appropriate directory using the 'cd' command (no relation). That directory is likely to be /mnt/cdrom/RedHat/RPMS. Then issue a command like:
rpm -i man-2.3.10-19.i386.rpm
(where the actual filename will probably be different --- since this particular example is from a S.u.S.E. system which maintains its own collection of RPM packages).
If you don't have a CD but you do have Internet access you can use a command like:
rpm -i ftp://$SOME_SITE/$SOME_PATH/man-X.Y.ZZ-X...rpm
... and the 'rpm' command will fetch the file from the site and install it in one operation.
The process is similar for any of the RPM based distributions (Caldera, S.u.S.E.). For Slackware you find the appropriate binary "tarball" on your CD (or on any FTP mirror site). You'd then 'cd' to your root directory and extract the contents of the .tar.gz file using a command like:
tar xzf /mnt/cdrom/.../man-X.YY.Z.tar.gz
(or whatever).
Under Debian you'd use the 'dpkg' command (which I don't know well enough to provide an example of).
If you don't want to just blindly re-install; you'd like to find out a bit more about what went wrong, you can use any of the following:
Red Hat (and other RPM based systems):
rpm -V man
... this will query the RPM database for details about the files that are supposed to be installed as part of the man package and produce a "verification" report (listing any files that are missing, changed or have changed ownership, type or permissions).
rpm -Vp /mnt/cdrom/RedHat/RPMS/man-.....rpm
... this will "verify" the installed files against an RPM file. In other words, it doesn't rely on the local databases but checks the installation against an original source file.
Debian:
dpkg -C $PACKAGE_NAME
(I don't know most of the details on this. I'll have to get another system to run Debian on).
Slackware and other "binary tarball" installations:
cd / && tar dzf $TARBALL_FILENAME
(I hope it's obvious that these $XXXXs that I'm using in these examples are placeholders where you'll have to fill in real values as appropriate. I'm following a common Unix documentation convention of using placeholders that "look like" shell or environment variable names).
The 'tar df' command (with or without the -z option) is an interesting one. It will describe "differences" between the .tar file (.tar.gz if used with -z, as in my example) and the files relative to your current directory. Since Slackware tarballs are relative to the root directory we precede the command with a 'cd'.
A practical consequence of this 'd' option to GNU 'tar' (I don't think it's supported under most older versions of 'tar') is that you can also use it with your own backups. Thus if you backup a system using the 'tar' command to a tape drive, you can insert the tape, (rewind it with the command 'mt rewind' or 'mt -f /dev/st0 rewind') and use a command like:
tar df /dev/st0
... to report on all file changes since your backup (or to verify the integrity of the backup depending on what actually happened).
There are similar options to other forms of backup. The 'cpio' command seems to have no option for actually comparing full file contents and meta-data (ownership, permissions, etc) --- just a way to test "CRCs" (checksums). The 'restore' command can be used with its 'C' directive to verify backups made with the 'dump' command.
There are other, more sophisticated, ways to perform filesystem integrity testing (to isolate corrupted files, or detect sabotage). 'tripwire' is the most well known. After many years of being freely available it has now undergone a commercialization effort by one of its original authors.
Thanks in advance for the answer, or the redirection to a place more appropriate to find it if that's the case.
David Brown
PS Supplemental Question: What do I need to know about all that stuff that fsck did to fix my system? I'd try to look up the rudimentary info about fsck in man, but...
Get your 'man' subsystem fixed or re-installed, then read more about it. You can also read the source code for the 'fsck' command --- and there is supposed to be a very technical description of the low-level ext2 filesystem internals in one of the LDP guides (probably the Programmer's Guide).
To learn more about Linux you can start with the guides on the Linux Documentation Project's web site (http://metalab.unc.edu/LDP). Also at this web site are a couple of hundred HOWTOs, and a few FAQs. These are the best introductory materials available for many of the specific topics that they cover (they are written by users for other users and generally give short "real-world" examples).
From DrDave on Thu, 28 Jan 1999
Jim:
I found the problem, which, it turns out, was unrelated to my system burp and forced fsck activity. It was actually related to a change I made in my ~/.bash_profile before the badness happened.
I installed QT, when I was thinking it would be nice to have an ICQ client running on my machine under Linux, and I was trying to get LICQ to work for me (no luck there yet.) One of the recommended changes to .bash_profile was improperly setting $MANPATH so it included only the QT manfile path. I commented those lines out, logged in again and now 'man' works fine.
I'm guessing that setting $MANPATH=/foo causes man to automatically run as if you'd typed 'man -M /foo', and the -d option reports what it sees in the man.conf file rather than what it would use if it were actually going to try to fetch an entry.
Thanks one last time... David
I should have mentioned MANPATH --- though I almost never use it. I thought about it but it didn't relate to the rest of your problem description at all.
In any event it's always a good idea to try commands from a "test" account when they aren't working from your normal login. There are a surprising number of problems you can create for yourself with bad or corrupt dotfiles in your home directory.
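For instance, a safer way to add a package's man pages (the QT path here is just an example) is to extend $MANPATH in ~/.bash_profile rather than replace it:

# append to, don't overwrite, the existing search path
MANPATH=/usr/local/qt/man:$MANPATH
export MANPATH

Leaving MANPATH unset entirely also works; 'man' then falls back to the paths listed in its man.conf file.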
From Mark F. Johnson on Wed, 27 Jan 1999
Greetings Once Again Honorable Guru,
My newly acquired, but soon to be short-lived, reputation as a Linux sage is in danger. I have been helping my friend set up Linux on his PC at home. He was the one who waged the Winmodem battle I told you about. He is attempting to duplicate my success at dual-booting Windows98 and Linux (RedHat 5.2). When he uses the workstation install mode, everything works fine. But when we attempt a custom install, which I have done successfully numerous times, the install goes fine until the first reboot. Then the boot sequence stops after checking his partitions, with a message that reads "Unable to open a console". We have done everything identically to the method I used on my PC, which is a near duplicate of his PC. We have removed all the partitions and OS's, including Windows, repartitioned and reformatted the drive, verified that the available space equalled the size of the drive, and reinstalled Windows and then Linux. Still, no joy. Same message. During the custom install, we created a 300MB root directory, a 127MB swap file (he has 128MB RAM), and three 600MB (growable) directories (/usr, /home, and /opt). As I said, everything formatted and installed without a hitch until reboot.
I have searched the past Linux Gazettes for an answer to this problem, but I came up dry. Any help would be appreciated.
Unable to open console after reboot suggests a problem in your /dev directory tree. If the tty1 and other "virtual terminal" device nodes are inaccessible (you tried to put /dev as a symlink to some mounted filesystem or something like that) then I'd expect this error message.
You can get similar problems (error messages regarding utmp or wtmp files) if your /var/log doesn't get mounted --- or doesn't exist.
So, it could be some problem with the way you're structuring your filesystems. Boot from a rescue floppy and look around. Make sure that the /dev directory is on your root filesystem, that the /dev/tty[0-2]* device nodes are there, and that they are proper character devices. An 'ls -al' should look a bit like:
crw-rw----   1 jimd   users    4,   0 Jul 26  1998 /dev/tty0
crw--w----   1 jimd   tty      4,   1 Jan 26 22:23 /dev/tty1
crw-------   1 root   root     4,  10 Jan  7 17:41 /dev/tty10
crw--w----   1 root   tty      4,  11 Jan 24 18:18 /dev/tty11
crw--w----   1 root   tty      4,  12 Jan 25 05:42 /dev/tty12
crw-rw----   1 jimd   users    4,  13 Jul 26  1998 /dev/tty13
crw-rw----   1 jimd   users    4,  14 Oct  3 02:28 /dev/tty14
crw-rw----   1 root   tty      4,  15 Jul 26  1998 /dev/tty15
crw-rw----   1 root   tty      4,  16 Jul 26  1998 /dev/tty16
crw-rw----   1 root   tty      4,  17 Jul 26  1998 /dev/tty17
crw-rw----   1 root   tty      4,  18 Jul 26  1998 /dev/tty18
crw-rw----   1 root   tty      4,  19 Jul 26  1998 /dev/tty19
crw--w----   1 jimd   tty      4,   2 Jan 24 16:16 /dev/tty2
crw-rw----   1 root   tty      4,  20 Jul 26  1998 /dev/tty20
crw-rw----   1 root   tty      4,  21 Jul 26  1998 /dev/tty21
crw-rw----   1 root   tty      4,  22 Jul 26  1998 /dev/tty22
crw-rw----   1 root   tty      4,  23 Jul 26  1998 /dev/tty23
crw-rw----   1 root   tty      4,  24 Jan 26 22:44 /dev/tty24
crw--w----   1 jimd   tty      4,   3 Jan 23 09:09 /dev/tty3
crw-------   1 root   root     4,   4 Jan  7 17:41 /dev/tty4
crw-------   1 root   root     4,   5 Jan  7 17:41 /dev/tty5
crw-------   1 root   root     4,   6 Jan  7 17:41 /dev/tty6
crw-------   1 root   root     4,   7 Jan  7 17:41 /dev/tty7
crw-------   1 root   root     4,   8 Jan  7 17:41 /dev/tty8
crw-------   1 root   root     4,   9 Jan  7 17:41 /dev/tty9
... note that I define 24 of these ttys --- that's because I use twelve of them for logins, my X sessions (sometimes up to three of them) are on the next few, a copy of all 'syslogd' messages is on number 24, and I use the others with the 'open' command, or as targets for redirecting 'tail -f' output and other logging operations. So I use a lot more ttys than most people.
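If some of the low-numbered nodes turn out to be missing or mangled, they can be recreated by hand. A minimal sketch follows; major number 4 is the standard for the virtual consoles, and your distribution's /dev/MAKEDEV script, if present, can rebuild whole groups of these nodes for you:

# recreate /dev/tty1 as a character device, major 4, minor 1
mknod /dev/tty1 c 4 1
chown root:tty /dev/tty1
chmod 620 /dev/tty1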
Now, the odd thing is that this is happening right after a fresh install. I almost always use a custom install (one of these days I'll learn to use Red Hat's "KickStart" package --- though every installation I do is different, so it probably wouldn't help much).
So, I'd have to guess that somewhere you're forcing Red Hat to skip the installation of some vital package. It's hard to imagine how you're doing that. The only time I've come close to that problem is when I was experimenting with installing over FTP from a public Internet FTP site (that was very unreliable in Red Hat 5.2).
The obvious workaround is to install using their "workstation" profile and then to use the 'rpm' command to add and remove the packages to your taste after the installation is complete and you've successfully rebooted.
One way to get a full list of packages that you have installed on a Red Hat (or other RPM based) system is to use the command:
rpm -aq
... which you can redirect to a file, of course.
If just the package names aren't enough, you can use a command like:
rpm -aqi
... to get a full list of packages with a short (one screen full) description and some info about each.
So, you could create a package list using:
rpm -aq > /tmp/plist
... then edit that to delete the names of all the packages you want to keep. You can refer to individual rpm -qi screens for packages that you don't recognize by name by simply issuing commands like:
rpm -qi zircon-1.17-16
... (where zircon was a package name I picked at random).
(If you wanted to be clever you'd make a macro in your favorite editor to pull in the description of any package on which your cursor was sitting when you invoked it. In 'vi' that would be something like:
:map S mcyypI:r!rpm -qi ^[o^[k"cD@c^M^[`c
... (where S is just any key that you don't use much in 'vi' command mode. This macro sets mark 'c' and fills paste register 'c'. All of the ^[ are literal escapes and the one ^M is a literal carriage return; those are entered in 'vi' by preceding them with a ^v [Ctrl-V]).
So, using this macro you'd move your cursor over any package that you were wondering about, hit [S] (from command mode) and this macro would extract the "info" by "querying" the RPM database and insert the results into your editing buffer.
Once you've removed all the package names that you want to keep you could use a command like:
cat /tmp/delete.list | xargs rpm -e
... to try "erasing" (un-installing) everything on the list. Here I'm assuming you make a copy of your package list file to "delete.list" and edited that. Obviously you can use any filenames you like.
This might result in a list of error messages about how some packages could not be removed due to dependencies with other packages. There should be no harm done --- so this command isn't as dangerous as it might look.
After you've removed all the packages you don't want you can select various packages that you do want to add and simply use the 'rpm -i' command to install each of them.
This would be most easily done in a shell (rather than through an editor list). To save on typing I'd probably create a couple of shell aliases like 'q' and 'i' to query and install packages. Those would look like:
alias q='rpm -qp '
alias i='rpm -i '
Of course, looking through a list of almost 600 packages could get boring. You could narrow the list a bit by generating a list of the package names on the CD and comparing that to the packages listed in your database.
To do that you can use something a bit like:
rpm -qp *.rpm > /tmp/pkg.list
rpm -aq >> /tmp/pkg.list
sort /tmp/pkg.list | uniq -u
... since any package that is installed will be listed twice (once from the -qp listing and once from the -aq listing), the 'sort | uniq -u' step will leave you with a list of packages that are NOT installed. Note: This trick only works because you have just installed all the RPMs from this CD. If you had fetched and installed some RPMs from a different CD or from an FTP site then you'd have to use a different approach to weed out the "extras":
rpm -qa | sort > /tmp/pkg.inst
rpm -qp *.rpm | sort > /tmp/pkg.dir
comm -23 /tmp/pkg.dir /tmp/pkg.inst > /tmp/pkg.not
... this is a better technique overall. The 'comm' command finds lines "in common" between two files. It normally prints three columns of output --- but we just want the first column (the names of packages in the "dir" that are not in the list of "inst"-alled packages).
Incidentally using the command
comm -13 /tmp/pkg.dir /tmp/pkg.inst > /tmp/pkg.not
... or swapping the names of the files should give us a list of all "3rd party" packages that we've installed. That is, it results in a list of packages that are installed but for which there is no ".rpm" file in the directory listing. Obviously the fact that Red Hat stores all of its package files in a single directory on its CDs is pretty convenient here. However, even when we're using S.u.S.E. CDs (with several CDs to a set and RPMs scattered in number groupings) we can easily generate a single listing of all the packages from as many directories as we like.
(You can then print a list of those, or, to be even more clever, make a /tmp/pkglist/ directory and create a series of symlinks for each of the "not installed" packages). Here's a command that will do that:
cat /tmp/pkg.not | while read i; do
    ln -s /mnt/cdrom/RedHat/RPMS/$i*.rpm .
done
... (execute this command from your /tmp/pkglist directory!).
Now you can focus on these packages --- issuing your 'q' and 'i' commands. Or you could just use the 'q' alias to read more about each package --- and remove the symlinks for each one that you don't want to install. Then, when you're done, you could just issue a command like:
rpm -i *.rpm
... to install every package that's still listed in your temporary link farm.
Of course I've mentioned a number of other 'rpm' command tricks in previous issues. However, to save you the time searching through the back issues of LG I'll recap a couple of them here:
"Verify" all the installed packages:
rpm -Va
... this produces a list of any file from any package that is "missing" or has changed (checking MD5 checksums, time stamps, ownership and permissions, etc). Unfortunately the output doesn't list the names of the packages from which these files came. You can get that by using:
rpm -qf $FILENAME
(for any of the files that were listed as modified or "missing" --- or for any file that was installed by any RPM on your system, for that matter).
The -qf option associates a file with the package that "owns" it.
This "Verify" compares your files to the installed RPM database. It's possible to keep back copies of your RPM database on removable media (though they will typically be too large to fit on a floppy, even compressed in most of my cases). You can use the '--dbpath' option to force the 'rpm' command to use a database in some other location (such as /mnt/ls120/backup or /mnt/zipdisk/rpmdb.bak/).
Another trick is to verify a package installation against the contents of a package file. To do this you use the command:
rpm -Vp $PACKAGE_FILENAME
... in a previous column I gave a script that would verify any of your installed packages against any RPM in the current directory. However, it occurs to me that this script was probably unnecessarily complex --- I could use the 'comm' command to simplify it somewhat.
In this case we'd generate our two lists of packages as before. We'd also build an "index" of the packages (matching the package names to the filenames) using a command like:
ls *.rpm | while read f; do
    echo $(rpm -qp $f) $f
done > /tmp/pkg.index
Then we'd use a command like:
comm -12 /tmp/pkg.dir /tmp/pkg.inst \
| join - /tmp/pkg.index | cut -f 2 -d" " | xargs rpm -Vp
... this may not look simpler --- but it is much more elegant than the last version of this script that I posted. (I often forget about 'comm' and 'join' --- and I shouldn't). The 'comm' command in this case is just listing the packages in common (between our installed list and our directory listing). The 'join' command finds those lines in our index file that correspond to any of the package names we've listed (remember, package names and package FILE names don't have to match). The 'cut' command then simply "cuts" the filename from each line (that's "field" number two with a "delimiter" of a space; I could have used -e and "\t" on my echo command when I was building the "index" file to build it with 'cut's default delimiter --- though it makes no difference). Finally we pass the list of package file names to 'xargs' which builds a series of one or more 'rpm -Vp' commands by translating the arguments from its standard input into lists of arguments on the command lines it executes.
If we consolidate the code samples into a full script it would total about a dozen lines or less. (I think that's half of what it took in my previous example).
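Here's what such a consolidation might look like, as a sketch under the same assumptions as above (run it from the directory full of .rpm files; the temporary filenames are arbitrary):

#!/bin/sh
# verify installed packages against the .rpm files in the current directory
rpm -qa | sort > /tmp/pkg.inst
ls *.rpm | while read f; do
    echo $(rpm -qp $f) $f
done | sort > /tmp/pkg.index
cut -f1 -d" " /tmp/pkg.index > /tmp/pkg.dir
comm -12 /tmp/pkg.dir /tmp/pkg.inst \
    | join - /tmp/pkg.index | cut -f2 -d" " | xargs rpm -Vp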
I've used a number of techniques like these to manage the large numbers of packages that I have installed on some of my systems. I use 'sh' (actually 'bash') enough and on enough different systems that I don't even keep most of these scripts --- it's usually easier to just type them on the fly than it is to remember where I have them and go fetch them.
I think I'll put this one together and forward it to the Red Hat team, to the maintainer of the Linux-Tips HOWTO, and maybe post it on my website.
(It would be nice if someone generated a list of comparable 'dpkg' commands --- since I don't have the experience with Debian, and I'd like to learn more about it).
More importantly I hope I've given some nice examples of shell scripting --- ways to use commands like 'uniq', 'comm', 'join', 'cut', 'xargs' and those ubiquitous 'while read' loops that show up in so many of my scripts.
(Actually I should note that my use of /tmp for all of this is atrocious --- since anyone using this in a script on a multi-user system would be vulnerable to horrible symlink attacks. Usually I use ~/tmp for all of these sorts of things).
It turns out that I've been asked to teach shell programming at a local community college. I've never done any professional teaching before --- and only recently did my first public lecture. It's kind of exciting for a guy with no college degree himself.
Regards, Mark
I'm working from my text terminal in the living room tonight --- so I couldn't view this site's content (it doesn't come across in 'lynx'). I often use one of the terminals in the living room while I'm watching TV, or when I have friends over. One of my friends decided to drop by and do some programming on his laptop, and Heather is working on something on her laptop. My office (with my X station) is too small and cluttered for all of us to hang out in there.
Maybe I'll remember to look at it some other time.
Mark F. Johnson Systems Administrator Maxwell Library Bridgewater State College
From Scott Bulau on Tue, 26 Jan 1999
Dear Jim,
I am in need of a way to secure a modem line (serial) of an assigned tty port, from dial out. This seems like an impossible task. Do you have any suggestions, words of wisdom? I'm running 2.0.35 currently, a Slackware 3.5 distribution.
You want to prevent some or all of your users from dialing out a modem that's on one of your serial ports?
That's easy. Just change the ownership on the device node (/dev/ttyS* and/or the deprecated /dev/cua*) and (possibly) on every installed program that uses the modem.
Actually there is a minor complication here. Conventionally modem using programs are SGID to the "uucp" or "modem" group. That is to say that these programs execute as members of that group regardless of whether the user that started them was in the group or not. So the question becomes:
"How does one limit execution of SGID" programs?
If you strip off the world-execute bit with a command like chmod o-x, then you'd have to add the users who do need access to this program to the "modem" group. But then they wouldn't need to access your modem using the SGID program --- and they wouldn't have to respect the modem lock files or any other restrictions on the use of the device. So, we can't limit it that way.
We could make these programs SUID and change the ownership (rather than just the group assignment) of the device node. Then the devices wouldn't have to be group writable, and we could create a special group of modem users, assign our modem programs to that and add our authorized modem users to that group. However this poses a greater security risk. If someone subverts (tricks) an SGID program they can only do relatively limited damage. If they subvert an SUID program they can change the permissions and executable files owned by that program's account.
Hmm. Such a conundrum. The answer is pretty easy --- but I had to invent it myself. I've never seen it written up in any book or article (other than the ones that I've written).
THE WHOLE PATH IS A SET OF ACCESS CONTROL POLICIES!
So, you create a directory full of your SGID programs. You can assign it to any arbitrary group. Make the directory inaccessible to "others" (mode 550 or 750, for example). Now, only the owner of the directory and members of the associated group can access any of the links (filenames) in that directory. You can replace the original file link (under /usr/bin or wherever) with a symlink to the restricted directory. That symbolic link can only be followed by members of the associated group.
You can even make two different "group restricted" directories --- associated with different groups. Each can contain HARD links to the same SGID world-executable file. Members of either group can then access their link to the program, and thus execute it. Other users can't "see" (or access or execute) the program.
You could also require that a user concurrently be a member of multiple different groups to access a program or other file. You just put one group limited directory under another. The whole path is a set of access controls.
Of course there is a downside to this. Let's say that you wanted to grant 'minicom' access to members of "staff" and of "wheel." So you create a /usr/bin/staff/ and a /usr/bin/wheel/. Each is set to mode 750 and each has a hard link to the minicom program. You ensure that no other (world accessible) links to the program exist. Now these users have to use different paths to access the same program. This means that members of each group need additional entries on their $PATH environment string.
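As a concrete sketch of the scheme (the group name is just an example, and minicom keeps whatever SGID bit it already has):

# create a group-restricted directory inside /usr/bin
mkdir -p /usr/bin/staff
chown root:staff /usr/bin/staff
chmod 750 /usr/bin/staff
# hard link the SGID binary into it, then replace the world-visible
# name with a symlink that only "staff" members can follow
ln /usr/bin/minicom /usr/bin/staff/minicom
rm /usr/bin/minicom
ln -s /usr/bin/staff/minicom /usr/bin/minicom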
Even though it's not explicitly covered in any of the books I've read, I'm sure some sysadmins sometimes use a scheme such as the one I've described.
That's not so bad. It's a bit confusing --- but then, so are "access control lists" (ACLs) as supported by Netware, NT, and some other versions of Unix. I note that the versions of Unix which support ACLs (Solaris, HP-UX, AIX, etc) make no use of them by default. Professional sysadmins almost never use them. This suggests that the stock Unix "permissions" scheme is enough for almost all practical purposes.
You have to do this for every program which is SUID or SGID to the "modem" group (or whatever group you assigned your /dev/ttyS node to). Many sites use the "uucp" group for this (since the 'uucico' command, from the UUCP subsystem was one of the first commands used for this sort of thing).
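To find the candidates, something like the following should round up every program that is SGID (or SUID) to the uucp group or user; adjust the group name to whatever group owns your /dev/ttyS node:

# files with the setgid bit set and group uucp
find / -type f -perm -2000 -group uucp
# files with the setuid bit set and owner uucp, if any
find / -type f -perm -4000 -user uucp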
Thanks for a response, I know how popular you are.
Scott
From s.alexiou on Tue, 26 Jan 1999
I have RH 5.0 (2.0.32). Using their graphic tool, I created two /home accounts, me and guest, assigned UID and GID's and set passwords. The problem is, I can only log in as root. I looked for .nologin files, there seem to be none. I am attaching my /etc/fstab files. Thus, at the Linux prompt, if I try to log in as either of these two users, I am denied entry (back to the prompt). This is not an issue of case sensitivity.
Any ideas of what I am doing wrong? Sincerely, S.Alexiou
I have NO idea. I've gotten a rash of different reports of this sort. All involve Red Hat usually right after new installations --- no login from console, no login over telnet, no login as root, no login as anyone other than root.
Unfortunately all of these cases, so far, are being reported to me incompletely. Only sparse details have been provided (as above). I've mailed off troubleshooting suggestions and received no followup to explain them.
So, I don't get it.
You said you used their graphical tool to create two new accounts. One was named "guest" and the other was some sort of user name for yourself. You also said you set the passwords for these two accounts.
Let's try this: edit your passwd file. I personally prefer to use vipw for that --- but Red Hat 5.0 had a broken 'vipw' command (immediate segfault) and my fresh installation of 5.2 also has a broken 'vipw' command (needed to add a symlink from /bin/vi to /usr/bin/vi --- GRRR!). So, just use your favorite editor and keep a rescue floppy handy in case you reboot the system with a corrupt /etc/passwd file.
Make sure that the entries you tried to create made it into the passwd file. Send me a copy of it if you still can't get it to work. Try setting the account passwords to something simple like just "x" --- and use the /bin/passwd command, not any sort of curses or GUI front end. Consider removing 'linuxconf' (for troubleshooting).
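For example, something along these lines (the user name, UID, and shell are just illustrations):

# is the account really in the passwd file, and does its entry look sane?
grep '^guest:' /etc/passwd
# a healthy entry looks roughly like:
#   guest:x:501:501::/home/guest:/bin/bash
# make sure the shell in the last field actually exists, then reset the
# password with the plain command-line tool
passwd guest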
If you're using shadow passwords try running pwunconv and if you're not, try running pwconv (to convert your passwd file to or from shadow format).
Please, let me know if you figure out what's doing it.
From Swearingen on Tue, 26 Jan 1999
Is there a way that I can tell Linux (Red Hat 5.2) how much RAM my machine has?
Yes.
(The churlish imp in me would love to just leave it at that --- but I suppose you'd actually like to know HOW to do it).
Your kernel is responsible for all memory management under Linux. You can pass parameters to your kernel in a number of ways (depending on how you load it). The most likely scenario is that you are using LILO (the LInux LOader). This normally gives a brief prompt, at which you can type in a variety of parameters.
Read the bootparam(7) man page and BootPrompt HOWTO
(http://metalab.unc.edu/LDP/HOWTO/BootPrompt-HOWTO.html)
for details on the range of parameters that can be entered. You can also set environment variables which will be inherited by the init process (and thus by all other processes).
You can type in the mem= parameter there to over-ride the kernel's automatic memory detection and supply your own value. That will just affect one session (useful for testing your system to make sure that it will work with the value that you propose). To make this change persistent you can edit the file /etc/lilo.conf and add a line like:
append="mem=128M"
... note: The "append" directive in the /etc/lilo.conf "appends" a string to the kenrel's command line (invocation) so you can have multiple append directives, and I think you can put multiple parameters within one append= directive (all separated by spaces and enclosed with the one pair of double quote signs). You do need the quote signs and the M (for Megabytes).
I've covered this before. Earlier versions of the Linux kernel couldn't reliably detect memory above 64Mb on some (most?) systems. However, newer Linux kernels (2.0.36 and the new 2.2.0) should detect your full memory capacity automatically.
Of course I'm only guessing at the symptom that you're trying to address. I do know of people who maintain boot images with LESS memory than they have installed. This is usually done by software developers to allow them to test their packages under artificial "low memory" and "swap thrashing" conditions. This can be done exactly as I've described above.
Note: I hope it's obvious that we're talking about real memory (real chips and SIMMs inside your system) here --- and not about "virtual memory" (paging/swap space). The way to increase or disable your swap is to create a swap partition or a swap file (technically its really a "paging" partition or file --- but the term swap is misused throughout the libraries, sources, and documentation).
You can run the command "man -k swap" to learn about the commands and configuration files that relate to swap files and partitions.
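For example, adding a 64MB swap file goes roughly like this (the filename is arbitrary):

# create the file, initialize it as swap space, and enable it
dd if=/dev/zero of=/swapfile bs=1024 count=65536
mkswap /swapfile
swapon /swapfile
# to have it enabled at every boot, add a line like this to /etc/fstab:
#   /swapfile   none   swap   sw   0 0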
If you tell the kernel that your system has more memory than it really has --- you'll almost certainly crash, almost immediately.
From cly on Mon, 25 Jan 1999
Hi! I used: gzip -cr * > file.gz
Then I deleted the source... Now I have a file.gz, but how can I get back the files and the directory structure?
Cly
Basically there is no way (short of deep magic with a hex editor --- which is beyond my current skills(*)).
* (I used to do data recovery for the Peter Norton Group using DiskEdit on MS-DOS/FAT filesystems. However, I've never developed those skills on ext2 filesystems and the available interactive tools don't seem to be as advanced as the versions of MUSE and DE that I used to use).
So, I'd say that you'll have to use your most recent backups or recreate the files from scratch. Certainly you can look at the "Ext2fs Undeletion mini-HOWTO" at http://metalab.unc.edu/LDP/HOWTO/mini/Ext2fs-Undeletion.html for some suggestions.
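One small consolation, though: because 'gzip -c' writes each file as a separate member of the one output stream, the raw contents (though not the filenames or the directory structure) can usually still be pulled back out in a single lump:

# decompress every member of the multi-member .gz into one file
zcat file.gz > recovered.data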
From Alan Richard on Fri, 22 Jan 1999
Upon further investigation, I see that the 2.1.90 and later kernels have implemented RFCs 2018 and 1323. I found this on the www.psc.edu/networking/perf_tune.html page.
Thanks anyway, Alan
Thanks for following up so quickly with the answer to your own question. I was going to have to hunt through kernel sources and the kernel mailing list if I was going to answer this one.
To give you an idea of just how ugly that would be, let me ask:
What is the TCP/IP SACK feature? What does it do?
Why do we need/want it?
Is the Linux implementation any better or worse than others? (Or is it a feature where you pretty much either have it or you don't, and there is no "better" or "worse"?)
Alan Richard wrote:
Hey AnswerGuy,
Do you know anyone with a good implementation of SACK for Linux? I'm running RedHat Linux 2.0.36. I've searched the web a bit under TCP, SACK, and RFC 2018, and have yet to find any patch available for download.
My officemate, Mark Allman, is the co-chair of the IETF TCP Implementation Working Group. He says that SACK and Large Windows (RFC 1323) are now the standard for TCP, with Windows98 and Sun 2.6 having them already implemented. Where is the Linux community with respect to implementing these? (Mark would like to know, too.)
Thanks
h4>"Linux Gazette...making Linux just a little more fun!"
As the community effort to develop a Linux certification program matures, we need your help to move the process to the next level. It seems hard to believe that it's only been four months since the October LG article that launched this particular initiative. In that time, we have gathered together over 120 people interested in developing the certification program, joined together with another group that had been working on certification since the spring of 1998, and moved the whole process along quite far. (See my November and December articles for a history of the process.) This month's article will address our new web site, our mission statement, a logo contest, and how you can get involved.
This month we are pleased to announce a new web site describing our proposed certification program:
http://www.linuxinstitute.org/
Please visit the web pages, read about the program we are proposing, and jump on board to help us out!
Credit for the site (and thanks!) is due to Evan Leibovitch who set up the site, established the domain name, and is serving as our webmaster.
I also have updated a web site I established a few months ago to provide a central listing of Linux training resources at:
If you are a training provider, courseware vendor, or independent instructor, please visit that site and submit a listing so that I may include you on the list.
After much discussion, we have arrived at a mission statement that defines the goal of our certification effort:
We believe in the need for a standardized, multi-national, and respected program to certify levels of individual expertise in Linux. This program must be able to satisfy the requirements of Linux professionals, as well as organizations which would employ or contract them.
Our goal is to design and deliver such a program from within the Linux community, using both volunteer and hired resources as necessary. We resolve to undertake a well-considered, open, disciplined development process, leading directly to the establishment of a recognized and widely-endorsed Linux certification body.
Thanks are due to Evan, Chuck Mead, Tom Peters and a number of other individuals who hashed this out on the linux-cert mailing list.
As part of our effort to build this new web site, we are sponsoring a contest for a logo for our project. Several entries have already been received. Please visit http://www.linuxinstitute.org/tli/logos.html if you have an interest in creating a graphic for the site.
We need you! If we are to pull off a program of this size and scale as a community effort, we need the help of everyone who may be interested in having a professional certification program for Linux. Whether you have a large or small amount of time to help... whether you are a Linux "guru" or a "newbie"... you can help make this program a reality!
To help out, you need to join one or more of our mailing lists. Before you decide how you can help, please read about our proposed program (which has been arrived at over the past four months of discussions) and the structure we are building to move the whole process forward. I would suggest you also browse the archive of our linux-cert mailing list to understand the discussions we've had to date.
After reading our information, please plunge on in, join a list (or lists) and help us out!
It's been an exciting time for us all. We've had some great debates and argued many philosophical and practical points. We have a lot more to do - and will doubtless have many more debates ahead of us. But above all, it's been a very professional group of people focused on getting a program accomplished! The market has changed, too. There is no longer a question of should a Linux certification exist (which our group never debated - we have only asked for people to be involved if they want to see a certification program happen), but rather who will define that certification program. Will it evolve out of the community? Or will it be specified by a vendor or distributor? We believe it should come from the community and we hope you will join us in that effort!
Please join us on the list(s) and let's make this happen!
"Thou hast to recompile thee kernel".
This antique curse has been thrown at every Linux newcomer since the birth of Linux. Unfortunately, as long as kernel recompiling is deemed a necessary part of a Linux installation it will be impossible to spread Linux among non-nerds. In this article we will make a detailed analysis of the performance increases one can expect from kernel recompiling.
To begin with, we will look at module compiling. Compiling a module directly into the kernel will save a little more than 2K per module: about 2K due to page alignment plus a small bit of code for loading and unloading the module. Now, despite being a module fanatic I never managed to be in a situation with more than ten modules loaded, but let's imagine you have 20 modules loaded and all of them are needed permanently, so you compile them into the kernel. You would save 40K of memory, that is, 0.5% of the memory of an 8 Meg computer.
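If you are curious about your own system, the numbers are easy to check (the output naturally varies from machine to machine, so this is only a sketch):

# list the currently loaded modules and the size each one reports
lsmod
# compare overall memory use before and after unloading one
free
rmmod some_module     # substitute a real, unused module name here
free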
Now we will look at the benefits of a lean kernel. When Matt Welsh wrote his books, kernel recompiling was undoubtedly necessary. It was not uncommon to be able to save over 1.5 Megs of memory, and your average computer had 8 Megs of RAM. Thus recompiling would increase available memory from 5.5 to 7 Megs, a 27% increase.
But people failed to notice that Linux has gone modular and computers got more memory. Today most distributions ship modular kernels, so recompiling will yield benefits much smaller than in 1995. As an example I tested recompiling the kernel shipped in Red Hat 5.2, with everything unneeded thrown out and everything else modularized where possible. The boot messages (that is, before the loading of any module) showed I had saved a mere 400K. In addition, today even low-end computers have 32 Megs of RAM, which means that recompiling your kernel will increase your available memory by only 1.25%.
It is possible to write a specially designed program that will not take a single page fault with N Megs of memory and will thrash horribly if you reduce that by a single page. However, in normal situations a 1.25% increase in available memory will make little difference. There ARE still a couple of distributions that ship kernels good for little else than installation: huge kernels lacking essential features, so recompiling is not a performance issue but a requirement. Now consider what happens if a small company without a full-time guru needs a firewall. Its expert is good for little more than starting Word. If he stumbles upon a distribution with one of those broken kernels he will fail and will end up recommending NT.
Most modern distribs (Caldera, SuSE, Red Hat and their clones) ship fully-featured kernels, and in addition kernel recompiling will produce no appreciable speed increase due to memory savings: they are good enough out of the box. Only a couple of "hackeristic" distribs will force you to recompile the kernel. But for the good of Linux you should ask the maintainers to fix them instead of compensating for their deficiencies. YOU can recompile, but your neighbour cannot, and he will choose NT.
Again, let's quantify this. Linux performs a number of optimizations for the CPU type, but most of them are performed at execution time and don't depend on compile-time options. On the one hand we will quantify the influence of the alternative portions of code that get compiled in, and we will also take a look at the influence of compilation options on the code generated by GCC.
Let's look now at the circumstances where selective TLB invalidation has a significant effect and quantify the slowdown.
First of all, if the kernel unmaps a page and then hands control to another process, it will reload CR3 and that will cause a total TLB invalidation (different processes have entirely different mappings), so you get any benefit only if control is handed back to the same process, either immediately or after some time in kernel mode. Also consider that the time wasted on an entire TLB invalidation is a few microseconds, while disk IO takes 10 milliseconds in the best case, that is, a thousand times more. That means that if there is disk IO following this unmapping (due to a swap out), the benefits would be insignificant.
In fact, about the only case where selective TLB invalidation will be meaningful is the following scenario: a process frees memory so the kernel invalidates the TLB, it hands control back to the same process, and then the process scans a large array doing only a single access for every entry; then, just when the TLB is fully reloaded, it unmaps memory again, there is a new TLB invalidation, the kernel gives back control again, and the process scans the same array entries. Highly theoretical, and don't forget that during the second pass the page entries will be in cache, so address translation will be much faster and this will reduce the benefits gained from selective TLB invalidation.
Let's evaluate what happens in a normal process. We will arbitrarily assume this process runs for one tick (10 ms) after the unmapping. For everything else we will take the worst case. The slower the memory, the more costly translation is, so we will assume this computer uses 60 ns DRAM instead of SDRAM. The larger the TLB, the bigger the benefit of selective invalidation, so we will choose a CPU with a big TLB; in our case it will be an AMD K6 model 7: it has a 64-entry TLB for code pages and a 128-entry TLB for data pages. We will also assume that we never find either page table entries or page directory entries in the cache (the latter is very unrealistic, because a single directory entry covers 4 Megs of address space), so every translation will need 2x60 = 120 ns, and the complete refilling of the TLB needs 120 ns * 192 TLB entries = 23 microseconds. Because we assumed the process would be running for a whole tick, that means the slowdown due to address translation is only 0.2 per cent.
If you are an expert and have a spare machine for experimenting, then you could try recompiling using more aggressive optimizations than the standard -O2, or using a better compiler than gcc 2.7, like egcs or pgcc. However, be warned that all 2.0 kernels up to 2.0.35, and possibly 2.0.36, have some bugs that will break the kernel with any compiler other than gcc 2.7 (they work thanks to gcc 2.7 bugs). Also be wary of some optimizations, like loop unrolling, which according to the egcs and pgcc documentation were never thoroughly tested, be it in gcc, egcs or pgcc; and remember that egcs and pgcc are not as well tested as gcc (egcs 1.0 was notorious for its FP bugs). Given these warnings, there is a 7% speed difference between the Byte benchmarks compiled with -O6 and loop unrolling and the same benchmarks compiled with plain -O2. So playing with the compiler and compiler flags is an interesting possibility if you are an expert: it could help the kernel developers determine which of the more aggressive optimizations don't break the kernel. If you are not an expert then don't lose sleep over this. The problem is that only a small part of the time spent by your program is spent executing the parts of kernel code affected by these optimizations.
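For reference, the rebuild itself is the easy part; for a 2.0.x kernel the usual sequence (run from /usr/src/linux) looks like the sketch below. To experiment with another compiler or other flags, edit the CC and CFLAGS lines in the top-level Makefile rather than overriding them on the command line, since the kernel adds its own required flags there:

make menuconfig      # or 'make config' / 'make xconfig'
make dep
make zImage          # or bzImage for a large kernel
make modules
make modules_install
# then install arch/i386/boot/zImage where your boot loader expects it
# and re-run /sbin/lilo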
Kernel recompiling for your specific processor gives only a minimal CPU boost when the kernel version is 2.0 and the processor is a 1998 or earlier model of the i386 architecture. This could change in future versions of Linux or when using newer processors.
Kernel recompiling has been seen as the panacea for Linux optimization. Unfortunately this idea doesn't stand up to serious analysis. It also has two serious drawbacks. First, it is poor public relations for spreading Linux among normal people. Second, it has sterilized investigation of more effective optimizations.
Nearly every article that I have read in The Linux Gazette has been technical and/or practical, so let me apologize if this seems a bit "off topic." I am primarily an anthropologist, and as such have always been a bit more inclined to write about things more generally. Instead of the technical and practical, I want to wax philosophic for a bit on the subject of free software in general, and the Linux kernel in particular by "porting" a bit of my philosophy of life to the computer. I have tried to write these articles for both the newcomers to the Free Software Community (FSC) as well as for those who have been around a lot longer than I. I will not waste time on the definition of free software except to say that it is free as in freedom. For a definition, I would have the reader visit the GNU/Free Software Foundation website. The few facts that I intend to present will only be news to those unfamiliar with free software, while the philosophy- at least as seen from my vantage- will probably be new to all. My idea is to present what for the lack of a better term I call The Four Cornerstones to the Foundation of Free Software. These are the four main things that I consider vital to the Free Software Movement (FSM) in general, and to the Linux kernel in particular. They are, in no particular order: Doubt, cooperation, non-control (read: Freedom), and rebellion. I have chosen to break these up into a series, because it would be a bit long as one article. In each case, I will give an explanation of what I mean by the idea and an example of how it pertains to the FSM. I also offer the opportunity for discussion/argumentation if anyone cares to explore "Free Philosophy" further. To those few I invite the use of my email address at the beginning of these articles.
The first cornerstone that I will discuss is that of doubt. It is a very powerful and useful word, unfortunately, doubt has gotten a bad rap for no-good reason. When one thinks of doubt, they are almost certainly consumed with thoughts of lies, fear, and uncertainty. It is a dark word, and one that we rarely use in association with someone or something that we love. This is wrong. I believe that doubt, often pure, serious doubt, is absolutely necessary for any true love and exploration of a subject. I also think that if it were not for doubt- and the admission of self-doubt- we wouldn't have free software.
The FSC has a large share of doubt, and this has been one of its main strengths. We doubt that software will work properly, we doubt that it will work at all. We doubt that the code was written efficiently, we doubt that it couldn't be better. Most importantly we doubt that we, ourselves, have written it the best way it could have been written. This doubt, about our product and about ourselves, is the main strength of all free software. Do not misunderstand me on this point. I am in no way suggesting that we are "suspicious" of every program that we use, or that we build binaries expecting them to fail. What I am suggesting is that we do not consider the program "complete," in the sense that the code is unable to be improved or changed.
I'll give you two scenarios to illustrate my point:
Scenario one: I'm a guy who has been programming since I was twelve. I know that I'm a damn good (if a bit arrogant) coder. One day I finish a big program that is my masterpiece. I cried when I compiled this baby. Hell, I almost got divorced because of it! I have no doubt in my mind that this program is perfect! I would immediately punch anybody who said otherwise. So I market it. I box the binary and I ship it, knowing that I'm going to be the next Bill Gates. Soon, I find out that I am the next Bill Gates, after a fashion. My program locks up computers from here to New Jersey. Not all of them, mind you, but enough to hurt sales and make people wonder. The bad thing is that I can't figure out why. Certain people didn't like it in the first place because it's big. Now, nobody wants it because it's big and buggy. Even though I tested the hell out of that program.
What I don't know is that some geek in Indiana has figured it out. He has two computers, and the program only crashes on one. It's the Pentium II with the BX chipset on the motherboard. It also crashes his friend's LX chipset computer. I have a Pentium Pro, but everyone wants a Pentium II these days, and they all want that extra speed on the board. Suddenly people start realizing that my product (and probably my programming) isn't worth its salt. My masterpiece has failed.
Scenario two: Same guy, same program, same long fight with his wife. He is very sure that his program is perfect, but has just enough doubt (read: wisdom) to know that there is always somebody better. He has just enough doubt to realize that a program can be written in so many ways that his chances of using the best one in this situation are not 100%, and his chances of using the only good one for every situation are pretty near 0%. So he offers his product as free software. He gives everyone the right to use it and modify it, hoping that no one needs to, but knowing that many will do so anyway. Unfortunately, the program creates a nightmare for him by crashing every computer from here to New Jersey. In this scenario, however, there's a geek in Indiana who figures out the problem and writes a patch. Within weeks the patch has fixed the problem, and within months his program is ported to Alphas and Macs, something that he didn't even consider. His program is a success because he realized that he wasn't the one and only "God of programming." He had just enough doubt to temper his delusions of perfection.
Granted, this is a very simplistic situation, but it does highlight my main point. A lack of doubt, in every situation in life, leads to problems. Admission of doubt allows the possibility of another option; it is an opening, of sorts, to different ideas. To have absolutely no doubt is to become fanatical, and when one becomes fanatical, all options, all doors, close. All possibilities for change, or consideration of other methods, are destroyed. Ironically, the fanatic's love for a subject eventually becomes its downfall. In the long term, and in more radical situations, the very subject of the fanaticism is itself destroyed, because any thought that improvement or change could even be necessary is anathema to the fanatic's beliefs. Eventually, the subject of the fanaticism becomes something wholly different, and often counter, to its original purpose.
It's easy to see this closing of doors, options, and thought by looking at the worlds of politics and religion. It is also easy to see by looking at the world of proprietary software. Corel recently released its version of WordPerfect 8 for Linux, and has since been touting that the Linux community has a "desire for proprietary software," both on its website and in the press. The company is so sure that its product is perfect, that it is just what the Linux community wants, that it was patting itself on the back just days after the program's release. I can only assume, knowing what I know about people and bureaucracy, that it laughs at any notion that the majority of the Linux community could possibly be silly enough to consider its program big and buggy, despite all the evidence to the contrary. The fact that, in the Linux community, "proprietary" is often a derogatory word has never crossed their minds. My prediction is that they will continue to measure their "success" by the number of downloads, and not by the number of people who continue to use it on a regular basis. I suspect that many (myself included) downloaded it and almost immediately discontinued its use. The likelihood of a decrease in users is increasing because of good free software word processing programs and the continued growth in the appreciation of existing ones such as Emacs.
The FSC keeps doors open by holding on to that most important resource: Doubt. We are never happy or completely certain that something is "perfect," or that no-one else is able to improve on something. If it works, it is used and respected, but if someone, anyone, thinks that they could improve it- that's admired. We are also protected from the follies of proprietary software in another way. In the world of free software, KISS is the name of the game. The idea is often to Keep It Small and Simple (or my preferred version, Keep It Simple, Stupid). Here, the doubt is that a program that is a behemoth, with a lot of unnecessary fluff, is better than a small one which performs the same function, often more reliably. This is inherent protection from the delusions of grandeur that taint so many proprietary programs. Free software tends to keep its feet on the ground, instead of becoming the bloated dreams of a few hungry individuals.
Netscape recently learned of some of the benefits of the Free Software Movement when it released its code. Apparently, within days (perhaps hours) there was a group of Australian hackers who improved the code, increasing its security. This event was not only good for Netscape users, who have benefited from the increased security, but for Netscape as well. The company now has a better product to offer the consumers. The free software method offers a no-lose situation, and it guarantees success. The reason for this is the next cornerstone that I will be discussing: Cooperation. I will return next month to expound on that idea from the vantage point of my favorite Linux soapbox.
Welcome to the Graphics Muse! Why a "muse"? Well, except for the sisters aspect, the dictionary definitions are pretty much the way I'd describe my own interest in computer graphics: it keeps me deep in thought and it is a daily source of inspiration. This column is dedicated to the use, creation, distribution, and discussion of computer graphics tools for Linux systems.
This is a short issue of the 'Muse. I'm in the process of moving from Dallas to Denver so life has been rather hectic. But I didn't want to completely skip this month since last month was lost due to my hard disk crash in November. Details, details, details. I should have a few more articles next month. In the meantime, you can check out an interview I did originally for the December issue:
CgmVA is an applet that shows CGM files. CGM is a non-proprietary, well-known vector graphics file format. The user can zoom and scroll around the viewed image. CgmVA is scriptable with JavaScript. You can control up to 16 layers with several images in each layer. The images can be magnified and moved by the script or be controlled by the user with the mouse. Changes: text path supported, full support for character orientation and alignment, all line types, all marker types.
--Alexander Larsson
The Graphics Muse Tools are a collection of plug-ins, brushes, and patterns for use with the GNU Image Manipulation Program, more commonly known as the GIMP. The 0.1 release provides three plug-ins: ArrowGFX for creating arrows and pointers of varying types, CardGFX for creating business and greeting cards, and TransGFX, an alternative interactive rotation transform tool. Additionally, a collection of new brushes has been included. A set of patterns will be made available at a later date.
Download: http://www.graphics-muse.org/source/gfxmuse-0.1.tar.gz
Homepage: http://www.graphics-muse.org/sw/sw.html
tgif is a vector-based draw tool, with the additional benefit of being sort of a web browser. That is, you can fetch drawings from a web server with it, and you can make objects in your picture into hotlinks to other parts of the drawing, or to other drawings accessible via http.
Homepage: http://bourbon.cs.umd.edu:8001/tgif/
LibGGI 2.0 BETA1 is finally out. LibGGI has been split into a library doing generic input handling, called LibGII, and the "traditional" LibGGI, which takes care of handling graphical output to virtually anything used to display graphics on Linux, or Unix in general.
ftp://ftp.ggi-project.org/pub/ggi/ggi/2_0_beta_1
For those who don't yet know what LibGGI is about and why you want it as well: LibGGI is an attempt to unify all those graphical output systems that exist on Unix, with possible ports to other systems as well.
Introducing http://www.script-fu.org A resource for Gimp's Script-Fu programmers. Includes lots of tips on how to use script-fu, including how to run a script directly from within GNU Emacs.
--Zachary Kessin
http://www.script-fu.org
Xi Graphics has recently provided updates adding support for new laptops, new multihead boards, and new graphics boards, or correcting problems in previous support. Updates may be applied to any Accelerated-X 4.1.2 Server on supported operating systems (BSD/OS, FreeBSD, INTERACTIVE, Linux, Open Server, Solaris/x86).
Accelerated-X (AX) for Intel processors
URL: ftp://ftp.xig.com/pub/updates/accelx/desktop/4.1.2/intel
D4102.028 Number 9 Revolution IV FPD + SGI 1600SW FPD
D4102.027 ATI update
D4102.026 Matrox Millennium G200 PCI/SDRAM
Accelerated-X (PX) for Alpha processors (Red Hat 5.2)
URL: ftp://ftp.xig.com/pub/updates/accelx/desktop/4.1.2/alpha-processor
P4102.001 Matrox Millennium G200 PCI/SDRAM
Accelerated-X Laptop (LX) for Intel processors
URL: ftp://ftp.xig.com/pub/updates/accelx/laptop/4.1.2
L4102.021 Toshiba Satellite Pro 490XCDT (S3 ViRGE MX)
L4102.020 Cyrix MediaGX Laptop Mobo
L4102.019 IBM ThinkPad 770X (Trident Cyber 9397 DVD)
Accelerated-X Multi-head (MX) for Intel processors
URL: ftp://ftp.xig.com/pub/updates/accelx/multihead/4.1.2
M4102.008 Colorgraphics Predator 2/4 AGP (S3 Savage3D)
M4102.007 Matrox Quad Productiva G100 board
Each update has a gzipped tar archive and a text file describing the update and the update procedure. The INDEX file in each product directory lists all updates and prerequisite updates.
The Poor Man's Renderer is a free, simple 3D rendering/editing tool for Linux.
The home page is http://borneo.gmd.de/AS/janus/new/pmr/pmr.html
There are several improvements since version 1.3.
Metro Link proudly announces the early access release of Metro Extreme 3D for graphics cards using a single 3DLabs GLINT 500MX chip on a Linux/x86 operating system (glibc or libc5). Metro Extreme 3D is an SGI-compliant port of OpenGL which provides 3D hardware acceleration on specific cards.
This early access release, as well as the upcoming official release of Metro Extreme 3D, will be a free upgrade for all existing customers with a valid Metro OpenGL license. In addition, anyone who purchases Metro OpenGL will automatically get the official version of Metro Extreme 3D when it is released. Contact sales@metrolink.com to get your free upgrade or to purchase a new license.
Metro Link has created two newsgroups for discussion of this product and its subsequent releases. The public newsgroup is for customers and potential customers who want to stay informed of product development. The other newsgroup is private, for interaction with customers actually using the early access release of Metro Extreme 3D.
To join the public newsgroup, point your news reader to news.metrolink.com and look for metrolink.me3d.
To join the private newsgroup, contact sales@metrolink.com to verify your original purchase of Metro OpenGL and to receive a login and password required for participation in this group.
Metro Link's goal is to provide the highest performance and most robust software to the Linux/UNIX community. Metro Link provides mission critical X Window System and related software for many Linux/UNIX platforms. Our software has been proven in the Boeing 777, the Space Shuttle, the 767 AWACS, the Crusader Self-Propelled Howitzer, the Army Land Warrior and many other applications which demand high reliability.
Metro Link Incorporated
www.metrolink.com
sales@metrolink.com
...you can find a set of gallery images and source files created with AC3D at the User Pages for AC3D: http://www.eilers.net/ac3d/
...you can find an interesting bit of news from Ton Roosendaal on the future of Blender on the Blender News and Chat page.
A: There are a dozen ways to do this. Here's an example:
Q: Regis Rampnoux wrote: I have put an offer on my web pages to find a developer for a driver for the Epson Stylus Photo color printer and others with 5 ink cartridges, like the EX.
A: Michael Sweet replied: EPSON has released the information for 6-color printing so the next version of the print plug-in for GIMP will support it. As for GhostScript/other drivers, my company is in the process of porting our software to Linux and may also do a FreeBSD port.
Q: Any pointers/tutorials/utilities for making fonts?
A: xmbdfed - there is a link to a static binary for this at fonts.themes.org.
Q: That program creates pixmap fonts, but what about vector fonts? Anybody?
A: spif is a vector editor, which is unfortunately not available. See http://www.gh.cs.su.oz.au/~matty/Spif/ for info. Also, gfonted is available, but it doesn't do a whole lot yet. See http://www.levien.com/gfonted/ for details.
Zachary Beane xach@mint.net
I hope I have come to the right place. I found an old article of yours with a reference to the bttv video driver. I am fairly new to Linux, but I do have a background in computers. If I have the wrong person, please let me know where I might find the answer to my question.
'Muse: I don't have this card so haven't tried this yet, but I'll see what I can do.
I have been trying to install the bttv driver for a USRobotics BigPicture video capture card, which I hear will work, but it is not the hardware support I am asking about. I have the source, a patch, and it says there is an application for putting the captured video on the screen that comes with the bttv source. I do a make, then make install. Now what? When and how do I patch it, and isn't there supposed to be a kernel recompile involved? There is no choice for installing the module in a make xconfig now.
'Muse: Patches are applied to the source prior to running "make". To apply a patch you use the "patch" command, usually something like this:
% patch < patchfile
where patchfile is the name of the patch file. You usually have to be in the directory where the source code is or (if there are multiple directories in the source code distribution) in the top level directory.
After applying the patch you run "make". "make install" will (if the distribution supports this) install the binaries in one of the common binary directories, such as /usr/bin or /usr/local/bin. Often you can specify where these files will be installed either by editing the Makefile, a configuration file of some kind (config.h for example) or specifying a command line option if the distribution uses a "configure" script. It doesn't sound like the bttv distribution uses configure since you didn't mention it. Also, it doesn't sound like make install worked since the application didn't get built either.
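To put the patch-then-build sequence together in one place, a typical session might look something like the following sketch; the directory and patch file names are only placeholders, since I haven't seen the particular bttv distribution you have:
cd bttv-source/              # the top-level directory of the unpacked source
patch -p0 < ../bttv.patch    # apply the patch (try -p1 if the paths don't match)
make                         # build the driver and any bundled applications
make install                 # install the results (usually run as root)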
As to recompiling the kernel, I doubt it. Linux supports loadable modules but not all drivers have to be part of the kernel. A good example of this is the X server, which drives graphics hardware but is not part of the kernel and is not a loadable module. Chances are that the bttv driver has an application that works with the driver to directly drive the video hardware without kernel intervention.
The application doesn't seem to have been automatically compiled.
'Muse: It may have to be built separately. It's hard to say without looking at the distribution source directly.
Please help a newbie try to get drivers up for his hardware.
'Muse: Did the distribution come with a README or some other text file explaining how to build it, or at least how to contact the author(s)? You might try contacting the author(s) if they gave their email or Web address. If that doesn't work, you might try a local Linux User Group (you can usually find one via SSC's web pages at www.ssc.com or www.linuxresources.com).
I plan on looking at this and other video and TV cards for my Muse column but it won't be for a while. Hope this helped a little.
No Web Wonderings this month.
I'm busy moving back to Denver and didn't have time to research anything
interesting. But I should have something for next month.
|
'Muse: Tell us a little about yourself. How did you get involved with printers?
M.S.: Back before I went to college I started fooling around with printing stuff on dot-matrix printers (EPSON, Radio Shack, etc.) This eventually led to color printing on an old HP DeskJet 500C and my second shareware program, "Image Master" (not the PC version, this was for a Color Computer).
Later I did a freeware program for IRIX called "topcl"; it was about this time that I started a software company (Easy Software Products) with a friend of mine to sell printing and 3D modeling software.
I guess my motivation all along has been to get what I have on the screen of my computer (pictures, computer graphics, etc.) printed out.
'Muse: What can you tell us about the current printing solutions available for Linux? How do the commercial solutions differ from using the stock "lpr" system?
M.S.: The current printing solutions are pretty primitive compared to the typical MacOS/Windows environment. PostScript printers are pretty well supported, however accessing specific printer features is usually difficult, if not impossible.
The standard print drivers shipped with the commercial Linux distributions (Red Hat, etc.) support printing of text and PostScript files. Support for non-PostScript printers is limited to the available drivers for GhostScript.
Currently there is only one commercial printing solution that I know of - PostShop from Vividata (http://www.vividata.com). Besides supporting PostScript and text files, they also support a number of image file formats (JPEG, GIF, etc.) and PDF (Acrobat) files directly. PostShop for Linux uses the Aladdin GhostScript 5.10 drivers for non-PostScript printers.
Another commercial driver package that will be available soon from my company is ESP Print. Like Vividata, we support a lot of different printers and file formats. The main difference is that we are also providing a new printing system that replaces the existing system (typically LPD or LPRng) with the Common UNIX Printing System (CUPS). CUPS uses the Internet Printing Protocol (IPP) and supports printer browsing, making it network-friendly. Also, CUPS supports job-specific options (something that LPD-based solutions do not) so that you can select different media sizes, type, trays, etc.
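As a rough illustration of what job-specific options can look like from the command line (my own sketch, not part of the interview; the destination name and option values are placeholders in the style CUPS uses), a job might be submitted as:
lp -d deskjet -o media=A4 -o sides=two-sided-long-edge report.ps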
'Muse: What is IPP and how does it relate to Linux?
M.S.: IPP is the Internet Printing Protocol, which is slated to become the next network printing standard. Vendors including Xerox, Hewlett Packard, and Microsoft are adding IPP support in their next generation of products, so having IPP support in Linux is important.
'Muse: Are you familiar with the recent InfoWorld article announcing the Universal Printer Driver Format (UPDF)? If so, what can you tell us about this and how might it relate to Linux? (http://www.infoworld.com/cgi-bin/displayStory.pl?981024.ehprint.htm)
M.S.: UPDF looks similar to Adobe's PostScript Printer Description (PPD) specification, just extended to support any printer language.
It would be interesting if they actually pull this off, however I know from experience that it will be difficult for anything but "standard" printers (e.g. PostScript and PCL). Most of the entry-level printers shipped these days use proprietary command sets and many reduce the manufacturing costs by implementing printer functions in software rather than hardware.
As for Linux support, it's too early to say...
'Muse: Wow. Lots of new acronyms for us printing-novices. So how does IPP relate to the use of UPDF or even PPD? It sounds like we'll be using IPP to send printer description files to printers. Does this mean IPP is how we'll talk to printers and UPDF is what we'll be saying?
M.S.: PPD and UPDF control what a print driver or application will send to the printer while IPP provides a standard protocol (via HTTP) for sending those jobs to a networked printer or server. It is likely that an IPP printer or server will provide the PPD or UPDF file to a printer driver or application via HTTP, something like:
http://myprinter.domain.com:631/printer.ppd
or:
http://myprinter.domain.com:631/printers/QueueName.ppd [CUPS does this]
Keep in mind that PPD, UPDF, and IPP are all separate entities and can operate independently. IPP, for example, is currently only a network printing protocol and would not apply to printers connected to a local port (e.g. parallel port).
Also, a big question is how a printer will be "discovered" on the network so drivers and applications know to use the IPP protocol. Currently there are dozens of "standard" protocols, known as Directory Services, for this kind of thing. IPP doesn't mandate any particular directory service, and right now work is underway to update SNMP (Simple Network Management Protocol), LDAP (Lightweight Directory Access Protocol), and SLP (Service Location Protocol) to handle the needs of IPP, specifically the URL/URIs to use for the printer. CUPS will be using its own protocol until things settle down and we see which protocol(s) are most commonly implemented.
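If a server really does publish the PPD at a URL like the ones shown above, any ordinary HTTP client should be able to fetch it; for example, using the placeholder host and queue name from above:
wget http://myprinter.domain.com:631/printers/QueueName.ppd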
'Muse: You wrote the Print Plug-In for the Gimp. What was your motivation for doing this?
M.S.: When I started using GIMP to retouch some of my photos, I noticed there wasn't a way to print yet. I ended up adding support for most of the entry-level inkjets, mostly because Linux users didn't have any other option.
'Muse: Did it take you long to write the first version of the plug-in?
M.S.: It took about 4 days to get the first version up and running. The output was OK, but the user interface left a lot to be desired. The current release amounts to maybe 100 hours worth of work.
'Muse: What sort of problems did you encounter while writing the plug-in?
M.S.: The biggest one (one that is still causing problems, in fact) is dealing with different printing systems. Each UNIX vendor uses a different spooler, so I had to put a lot of extra code in the plug-in to deal with it.
'Muse: How do you see this plug-in evolving with the Gimp? Will there need to be any major changes for the 1.2 release?
M.S.: GIMP 1.2 (and the 1.1 development version) adds support for different color spaces and resolutions, which will require quite a bit of "retooling" in the plug-in. The new versions of GIMP will also support physical resolution information, so if you're editing a 300 DPI image the print plug-in will need to handle that for scaling...
'Muse: I recently wrote a number of plug-ins, one of which could definitely use a direct interface to the Print plug-in. Do you have any tips for plug-in authors who would like to call the Print plug-in directly? Or do you recommend this not be done?
M.S.: It can be done through the PDB interface, however I would definitely use the interactive mode of operation. The non-interactive mode prevents users from selecting the printer and/or options they want.
'Muse: I noticed the margins in the Print dialog could only be set to 0 if you use the PPI setting. Is that intentional or was it possibly user error? I was trying to print a large document, 8.5"x11" at 360 DPI and didn't want the print plug-in to add any margins on its own.
M.S.: That's intentional, as it knows what the printable area is on the printer. If you have a so-called "full bleed" printer, the print plug-in will allow you to scale to the full size of the page.
'Muse: Does the Print plug-in now, or will it in the future, work with the commercial printing solutions?
M.S.: Yes, it already works with any software that uses the lp/lpr spooler interface. A future release of the plug-in will take advantage of printer information supplied by CUPS as well.
'Muse: What tips would you have for a novice user who is trying to decide on a new printer? What should they look for?
M.S.: Before they start looking they need to answer a few questions:
The easiest printers to connect to a UNIX system are PostScript printers. These usually cost more than non-PostScript printers, but don't forget to figure in the cost of driver software with your choice.
'Muse: Aren't most ink jets non-PostScript printers? I thought PostScript printers were all laser printers.
M.S.: There are a number of PostScript inkjet printers; HP's DeskJet 1600CM and DesignJet plotters have PostScript options, as well as inkjets from Tektronix, EPSON, Calcomp, Xerox, etc.
There are also a number of PostScript printers using alternative technologies, like Tektronix's "solid ink" based printers, dye-sub printers, and so forth.
It's possible for *any* printer to have built-in PostScript, however this generally raises the price of a printer. You also have to be careful about how the PostScript capability is implemented. For example, EPSON offers PostScript printing options for their Stylus Color 800 through 3000 printers, however these are all software RIPs and not built into the hardware of the printer. Only the Stylus Pro 5000 has a hardware RIP (made by Fiery, a very big PostScript RIP vendor).
'Muse: Do you see integration coming between printing on Linux and the two leading desktop choices, KDE and GNOME? If so, when do you think this might be available? Do you expect drag-and-drop printing options?
M.S.: Until there is a non-commercial version of Qt I don't see KDE and GNOME coming together. Qt is the source of many flame wars on newsgroups and mailing lists, and the desire amongst Linux users for free software is strong. There is work in progress to make a LGPL'd version of Qt available, so it is likely that some common method for drag-n-drop will be adopted for both desktops.
This will also require a standard printing system, and I'm hoping that CUPS will fill that need...
'Muse: What about professional (re: business) users - what should they look for when print quality is more important and usage is likely to be much greater?
M.S.: I'd still stick with those three questions. If you are sharing the printer over your LAN I'd definitely look at getting a network card with the printer.
Question #3 is very important for business users; trust me, if you exceed the monthly use rating for a printer it *will* fail more rapidly.
If price is a concern, look for printers that can be expanded/upgraded down the road. Hewlett Packard has several good laser printers (color and B&W) that meet these criteria.
'Muse: Any other thoughts on printing?
M.S.: Printing under UNIX currently lags behind Windows/MacOS in a number of important ways:
Online Magazines and News sources: C|Net Tech News, Linux Weekly News, Slashdot.org
General Web Sites
Some of the Mailing Lists and Newsgroups I keep an eye on and where I get much of the information in this column
Let
me know what you'd like to hear about!
A Linux Journal Review: This article appeared first in the February 1998 issue of Linux Journal. I decided to reprint it here because most of you who write letters to LG don't seem to know this handy command exists. While it's not mentioned in the article, ispell can be used from elm and other e-mail packages.
As a former Technical Editor, I know how easy it is to miss incorrect spelling when proof-reading, especially if the word ``looks'' right, e.g., compatability (sic). For this reason, a good spelling checker is a must. The command ispell does a good job and has special features to help it do even better. The Man page for ispell is very comprehensive, so I won't go into all its options--only my favorites.
When ispell has been invoked and it finds a misspelled word, options are displayed across the bottom of the screen:
[SP] <number> R)epl A)ccept I)nsert L)ookup U)ncap Q)uit e(X)it or ? for help
All you have to do is press the space bar (accept this time only) or A (accept for rest of document) to accept the spelling as is, press I to insert the word in the dictionary, or press the appropriate number or R to replace it. The main thing to watch out for is the right time to use R. When a misspelled word is found and the spelling choices are offered, the tendency is to press R for replace and enter the number of the correct choice--doing this results in the number replacing your word. Instead, enter the number of your choice immediately, and since replace is the default, the correct spelling will replace the incorrect one in the text. Use R only when a correct spelling is not offered by ispell.
Most of SSC's reference cards and command summaries use troff text formatting; other manuals use TeX. Use the option -n with troff text or -t with TeX or LaTeX, and ispell will ignore formatting commands, thereby returning fewer ``misspelled'' words for you to accept. While an option is not available to designate a Quark file, you can always insert the QuarkXPress formatting commands into your personal dictionary the first time they come up and not be hassled again.
In fact, the personal dictionary is probably the neatest feature of all. The very first time you select I to insert a word it doesn't recognize, ispell sets up a personal dictionary named ispell_english in your home directory. After that, any word you select will be added to this dictionary, and you will never be told it is misspelled again. This feature is particularly handy for proper names, buzz words and abbreviations unique to your business. Hashed dictionaries for other languages (that have been installed) can be specified using -d. In addition, you can set up special dictionaries for particular projects. For example, when I was editing the Java Reference Cards, I set up a special dictionary named ispell_java just for Java terms in my work directory. Afterwards, whenever I ran ispell, I specified the command line as:
ispell -n -p ./ispell_java java.troff
As a result, ispell knew class names like getFontList were spelled correctly, and that getFontlist was not. By the way, don't forget that the command line specification must include the directory of the dictionary (./ in the above example); otherwise ispell will look for it in your home directory.
Another handy feature to remember is how to check a single word instead of a complete file by using the -a option. For example, if you specify:
echo compatability | ispell -a
ispell will return the message:
&compatability 3 0: comparability, compatibility, computability
This message tells you ``compatability'' is misspelled, and gives you a list of 3 best guesses in alphabetical order. If you prefer not to have the list sorted alphabetically, use the -S option, and it will be sorted by best guess.
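If you check single words often, a small shell function wrapped around the -a option saves some typing; this is just a convenience sketch for a Bourne-compatible shell:
# add to your ~/.profile or ~/.bashrc
spell1() {
    echo "$1" | ispell -a
}
# usage: spell1 compatability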
All in all, ispell is an effective and easy-to-use all-purpose spell checker.
Why would we want to create a FAT16 partition during the first step of the installation process if at the end we want an NT workstation with NTFS? The DOS programs that we run during the installation process (steps 2 and 3 above) need to write data to the hard drive, but DOS programs cannot write to the NT filesystems. The first program that needs to write to the hard drive is our custom-made DOS program that finds the MAC address of the NIC and creates a uniqueness database file (UDF); this template file drives the NT installation program. The second program that writes to the hard drive is the Windows NT installation program itself, which copies the operating system files from a network drive to the local hard drive. Then the FAT16 partition is converted to the New Technology File System (NTFS).
We needed a better, faster, and error-proof way to re-partition and format the hard drives in the workstations. One option was to use a disk-copying program to copy disk-images onto the hard drive. For example, I could partition a hard drive and set it up just as I wanted it, and then take a snapshot of it (basically, dd if=/dev/hda of=image). During the installation procedure, I would copy the appropriate disk image on a workstation's hard drive (dd if=image of=/dev/hda). This method would have worked, but we would have needed images of every uniquely-sized hard drive that we wanted to deliver to our users. We really wanted a solution that would work on any hard drive, regardless of its size. It would be nice to have the solution work right away on any new hard drive we happened to get from our hardware vendor.
What fits on a single boot disk, gives you low-level access to the hardware, and gives the programmer the most tools to get the job done? Linux, of course. I knew that with a bit of work I could create a Linux program that would partition hard drives exactly as we needed them and avoid the need for human intervention.
I took the boot-disk from the Debian installation disk set and modified it. The diskette boots Linux and loads a compressed root file system to a RAM disk. The program /sbin/dinstall, which used to be the Debian installation script, starts automatically. This short script, which is now my auto-fdisk script, sends keystrokes to the STDIN of fdisk. First the script learns the number of cylinders that the hard drive contains by capturing the output from fdisk -l. The cylinder count is then used as input to a second run of fdisk in order to create a single FAT16 partition that spans all the cylinders of the hard drive. (cylcalc and fixbs are two programs called by dinstall.)
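The dinstall script itself is tailored to our environment, but the general idea of driving fdisk non-interactively can be sketched in a few lines of shell. Treat this strictly as an illustration: the awk field position depends on your fdisk version's output, and the keystroke sequence fed to fdisk varies slightly between versions, so verify it interactively before trusting it.
#!/bin/sh
# grab the cylinder count from the "... heads, ... sectors, ... cylinders" line
CYLS=`fdisk -l /dev/hda | awk '/cylinders/ { print $(NF-1); exit }'`
# keystrokes: n (new), p (primary), 1 (partition number), first and last
# cylinder, t (set type) with 6 (FAT16 >32M), w (write table and exit)
fdisk /dev/hda <<EOF
n
p
1
1
$CYLS
t
6
w
EOF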
After the drive is partitioned correctly, mformat from the Mtools collection is used to format the hard drive as FAT16. Mtools (http://gwyn.tux.org/pub/knaff/mtools/) is a collection of programs that allows Unix users to manipulate FAT media from user-space. That is, no mounting of the file system is done. The mformat program is great because it assumes that the medium is already low-level formatted; it writes just the boot sector and two copies of the FAT (file allocation table). In no time at all it creates the minimal number of pieces required for a FAT file system. The DOS format program spends more than a minute formatting a diskette; mformat does it in just a few seconds.
The time spent booting this Linux auto-fdisk diskette and re-partitioning and formatting the drive is between 1 minute and 1.5 minutes, depending on the speed of the CPU of the workstation. Five seconds of this time is spent re-partitioning and formatting the drive; the rest of the time is just the boot process. Compare this 1-minute run-time to the 10-minute run time of our old method, using Partition Magic. Not only does Linux let us prepare the computer in one-tenth the time, the computer is prepared correctly every time, with no possibility of human error. Saving about 10 minutes on 2000 computers saves us over 13 days over the time of the NT-rollout.
When our NT-rollout started, the computers came from our vendor with 1.2 GB hard drives. We easily created single FAT16 partitions on these hard drives, which our automated NT installation then converted to NTFS. Every user had a C: drive that spanned the entire hard drive. After a few months of the rollout, our vendors started to supply us with 2.4 GB hard drives. Since our FAT partitions were made from DOS, the partitions were limited to 2 GB. After the conversion to NTFS, the users had a 2 GB C: drive! We could have given the users a D: drive to use the rest of the space on their hard drive, but we worried that if users moved from computer to computer, the appearance of C: on one computer and C: and D: on another would confuse them. We decided to avoid confusion and create workstations with only C: drives. The workstations with the new 2.4 GB hard drives were delivered to users with 400 MB of unused, wasted space. This was a hard decision for us to make, but it was the best decision at the time.
We tried to use the ExtendOEMPartition flag (http://www.ntfaq.com/ntfaq/install.html#install29) in the unattended installation file to make NT use all unallocated space on the hard drive when converting the FAT partition to NTFS. This flag tells the NT installation program to grow the NTFS boot-partition to the extent of the unused space on the hard drive. However, setting this flag caused the NT installation program to pause and prompt the user for a keypress to continue, making our unattended installation attended. The ExtendOEMPartition flag was unusable for us. We recently have learned that there is a fix which involves extracting a file from Service Pack 3 before running the unattended NT installation (http://support.microsoft.com/support/kb/articles/q143/4/73.asp), but that solution was not available to us at the time. Not having a solution from Microsoft, we made our own. The Service Pack 3 fix only creates large NTFS boot-partitions for unattended installs. Our homemade solution creates large NTFS boot-partitions for both manual and unattended installs.
The solution to our problem lies in one key point. A filesystem is a data structure within a partition, whereas a partition is a chunk of the hard drive. Although the terms ``FAT partition'' and ``FAT filesystem'' are commonly used interchangeably, they are not the same. A FAT partition is simply space carved out of the hard drive, reserved for use by a filesystem. The only reason the partition can be called ``FAT'' is because the partition type, as identified in the partition table stored on the hard drive, is type 6, which is BIG-FAT16. That's the only ``FATtiness'' of a FAT partition.
A filesystem is the collection of structures that organize data inside a partition, and the data itself. A File Allocation Table in a FAT filesystem is a structure that acts as a table of contents, identifying where files are stored on the disk. Filesystems, FAT and non-FAT, are usually created to fill the disk partition in which they reside (what would you want to put in a partition next to a filesystem anyway?), but technically they don't have to be built that way. No commercial tools that we are aware of will allow you to make a filesystem that is smaller than the disk partition in which it resides, but it is possible to create such a filesystem. The trick is to tell mformat, from the Mtools collection, that the disk partition is smaller than it actually is.
mformat was designed to format floppies. It can also format hard disk partitions, but to do so it needs to be told all the geometry of the partition (cylinders, heads, and sectors). Since I want a filesystem smaller than the hard drive partition, I lie to mformat. I don't tell it the true number of cylinders that the partition uses; I only tell mformat about enough cylinders to make a 500 MB FAT filesystem. (I really only need about 220 MB for the NT installation, but I make 500 MB just in case). mformat dutifully makes a 500 MB FAT filesystem within my much-larger FAT partition.
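As a rough sketch of the kind of invocation involved (the geometry numbers are invented for illustration, and the drive letter assumes an /etc/mtools.conf entry along the lines of drive c: file="/dev/hda1"):
# with 64 heads and 63 sectors/track, one cylinder is roughly 2 MB,
# so about 250 cylinders gives a ~500 MB FAT filesystem even though
# the partition itself may span 1200 cylinders or more
mformat -t 250 -h 64 -s 63 c: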
Version 3.8 of Mtools contains a small bug in mformat when it is used on hard disks. The number of directory entries, which is a field in the boot sector of the FAT filesystem, is not written correctly. Less importantly, the jump vector is also slightly incorrect. I say that this is less important because this FAT filesystem won't be bootable, so the jump vector won't be necessary. To fix these small problems, a very small C program is run to fix the boot sector. This was easier than trying to fix mformat. Version 3.9 of Mtools is now out, but I do not know if this bug was fixed.
I then boot into DOS. By running chkdsk, I see that DOS sees its C: drive as being 500 MB in size. By running fdisk, however, I see that DOS knows that the only partition on the hard drive is 2.4 GB. This is quite an unusual situation, and perhaps the only time you'll ever see such a configuration. At this point, steps 2 and 3 from our installation process run. Files are created and stored on this 500 MB FAT filesystem, and the NT installation program begins. After copying 220 MB of operating system files to the C: drive, the computer reboots and the NT installation program resumes from the hard drive, converting the FAT filesystem to NTFS. When the FAT-to-NTFS conversion program runs, it converts the 500 MB FAT filesystem to NTFS, but continues converting to the end of the partition. We end up with an NTFS filesystem that fills the partition, no matter how big the partition is. Our users now have 2.4 GB NTFS C: drives.
The ability of the filesystem-conversion program to convert to the end of the partition was pure luck for us. It didn't have to do this. But the FAT-to-NTFS conversion program that comes with Windows NT reads the FAT filesystem size and the partition size as different measurements. It knows that it has to keep converting the rest of the partition, even when the FAT filesystem is much smaller than the size of the partition. This is a feature that is undocumented by Microsoft.
This trick we play in the FAT partition works equally well for manual installations of Windows NT. We have used this procedure for 1.2 GB, 2.4 GB, and 6.3 GB IDE hard drives, for both manual and unattended installs, with no problems. We stress-tested the filesystems on five different computers that were prepared this way. A program we wrote abused the filesystems on these computers over the course of a weekend, 24 hours a day. None of the filesystems had any problems then, and months later have not had any NTFS-related errors. Now that this procedure is being used in our NT rollout and a few hundred NT computers have been prepared this way, we have seen no NTFS corruption whatsoever.
Copyright © 1998, 1999 by Ron Jenkins. This work is provided on an "as is" basis. The author provides no warranty whatsoever, either express or implied, regarding the work, including warranties with respect to its merchantability or fitness for any particular purpose.
The author welcomes corrections and suggestions. I can be reached by
electronic mail at rjenkins@qni.com,
or at my personal homepage: http://www.qni.com/~rjenkins/.
Corrections, as well as updated versions of all of the author's scribbles
may be found at the URL listed above.
NOTE: As you can see, I am moving to a new ISP. Please bear with me as I get everything in working order. The e-mail address is functional; the web site will be operational hopefully around mid January or early February.
SPECIAL NOTE: Due to the quantity of correspondence I receive, if you are submitting a question or request for problem resolution, please see my homepage listed above for suggestions on information to provide.
Operating Systems Covered/Supported:
Slackware version 3.6
RedHat version 5.1
Windows NT Server version 4.0
Windows NT Workstation version 4.0
I only test my columns on the operating systems specified. I don't have access to a Mac, I don't use Windows 95, and I have no plans to use Windows 98. If someone would care to provide equivalent instructions for any of these operating systems, I will be happy to include them in my documents.
Part Six: Building an Internet Gateway
After much rewriting and testing, we will hook our home network up
to the Internet, using a Linux machine as an Internet gateway/proxy server.
The Linux machine will automatically connect to your ISP at boot time, configure itself, and re-establish the PPP link automatically in the event of a line failure. I will NOT be covering a dial-on-demand (diald) setup in this column; that will be covered next month in the advanced configuration and performance tuning column.
At the conclusion of this installment, you should be able to access the Internet from any machine on your network, send and receive e-mail (subject to the restrictions of the type of ISP account you possess), surf the web, and do most any other darn thing you might want to do.
As with each installment of this series, there will be some operations required by each distribution that may or may not be different in another. I will diverge from the generalized information when necessary, as always.
In this installment, I will cover the following topics:
* Some background information on Internet gateway services.
* Advantages and disadvantages.
* Required hardware and software.
* Pre-installation planning.
* Setting up the PPP Interface.
* Setting up the NIC.
* Monolithic vs. modular approach to gateway services.
* Recompiling the kernel for gateway services.
* Testing the gateway machine.
* Configuration of the client machines.
* Testing the client machines.
* Troubleshooting the installation.
* Some notes and tips on particular services.
* Example rc.local scripts.
* References.
* Resources for further information.
* About the Author.
Quick Review of previous material and assumptions relevant to this
column:
Briefly, at this point, we have a three node network, all configured
with reserved 192.168.1.x IP addresses, using a common hosts files for
name resolution.
The gateway machine will be called gateway01.home.net, and will have the IP address of 192.168.1.1.
It is assumed that the gateway machine has a standard, non Plug-and-Pray modem (or one that lets you disable the PnP features and manually set the COM port and IRQ values), installed either internally or externally.
NOTE: I have received many requests for the inclusion of 56K V.90 modems, ISDN modems, and cable modems in this document.
The ISDN modem's line provisioning and setup are beyond the scope of the document. However, if it connects using a serial port or network interface, there is no reason you should not be able to make it work. I have an Ascend Pipeline 50 myself, and have always had great success with it.
Concerning 56K V.90 internal modems, it is my understanding that these are at best a telco interface and impedance matching device, with the bulk of the work performed by software and your CPU. As far as I know these will not work with Linux.
If you have an external 56K V.90 modem, and it will accept the Hayes command set, give it a try. I would be interested to hear from you concerning your experiences with the external models.
Finally, concerning cable modems, I don't have access to one, so I don't know much about them. See the Cable Modem MINI HOW-TO. One bright note is that since these devices connect to your computer via a NIC, your configuration process will be much simpler than what we will be doing here.
It is assumed you know the relevant information for your particular ISP. At a minimum, you should have the following:
Access phone number
Fully Qualified Domain Name (FQDN) of your mail and news servers.
The IP addresses of your Primary and Secondary DNS servers.
Your subnet mask (usually 255.255.255.0.)
For more information on this subject, see my November column, or the ISP Hookup and Connectivity HOW-TO's.
Some background information on Internet gateway services:
People always say "You can't get something for nothing." Well, in a
sense, that's exactly what we are going to do this time. We are going to
use a standard, non-dedicated, and inexpensive dial up account to provide
Internet access for our entire network.
To accomplish this, we will be using the IP Masquerading software in conjunction with a firewall application (ipfwadm), as well as a NIC, modem, and what I call PFM - Pure Freakin' Magic.
Simply put, our machine will be performing two major functions. It will be acting as an Internet gateway, while simultaneously masquerading local IP addresses from the outside world.
The gateway function is fairly straightforward. A gateway does nothing more than connect two disparate networks, and make sure that all the traffic passed through the gateway reaches the proper destination.
The masquerading function, sometimes called Network Address Translation (NAT), is a bit more complicated.
Basically, it is a programmable liar. What a masquerade program does is take the requests from all the machines on our local (home) network and lie to the rest of the world about the source of the requests, making it appear that they all originate from the gateway machine.
Conversely, when the replies come back from the outside world, the little stinker grabs them, lies some more, and delivers them to the proper machine on the local net.
There is a lot more to it than that, but for the purposes of this project we will proceed with this explanation.
Advantages and disadvantages:
Advantages:
* You get to hook up your whole network to the Internet for $18.00 per month, as opposed to as much as $300.00 for a dedicated ISDN connection.
* You do not need to purchase a domain name, configure name servers, or deal with all the other administrivia that goes with a commercial installation (although much of what you will learn and do here will be applicable to such an installation).
* Indeed, our configuration and installation in this project will, in
many ways, be more intricate than a simple commercial installation. This
will give you not only a home network for a reasonable price, but a marketable
skill.
* If there are only two or three people on it doing e-mail, web surfing, or telnet, it should provide acceptable performance.
Disadvantages:
* Some ISPs are less than thrilled if you set up something like this. Although you are still using just the one dial-up connection, they, like most corporate people I approach about telecommuting from home, think there's just something wrong with it. It is possible you could be asked to get a business-type dedicated account, or your account may be canceled.
* Depending on the type of account you have with your ISP, you most likely have only one e-mail address. This means only you can receive e-mail with this setup. Some ISPs are beginning to offer "family accounts" with extra e-mail addresses available for a small extra monthly charge.
* While everyone on the network can surf the WWW, perform FTP, Telnet,
and many other applications, there are some things you will not be able
to do. See the IP_Masq document mentioned below for a complete listing
of supported and unsupported services and applications.
* Depending on the type of connection you use for your PPP link, performance can be really poor. Although there are some things you can do to improve performance and speed things up on a slow link (more on this next time), after a week or so of a 28.8 or 33.6 modem connection, you will be dreaming of an ISDN or cable modem connection.
* This sort of setup does NOT work well for providing services to the outside world. Since you are most likely using dynamic IP addressing, where you are assigned a different IP each time you connect, it's very difficult and not very practical to try to host your own services. You would be better served with a dedicated connection, or some co-hosted web space on your ISP's server, if you plan to do any business with this setup.
Required hardware and software:
RedHat - Accept the defaults, and additionally select Dialup Workstation,
Networked Workstation, and C Development tools and libraries.
You may also want to consider adding Mail/WWW/News Tools, DOS/Windows Connectivity, NFS Server, SMB (Samba) Connectivity, Anonymous FTP Server, or anything else you require for your particular installation.
As below, skip APACHE, INN, and BIND. When prompted, go ahead and set your local network information. Leave your nameserver and gateway prompts BLANK.
You don't really get a choice of kernels here, so accept the default, and when prompted, be sure to make a bootdisk.
Finally, install LILO on the first sector of the install partition; DO NOT INSTALL LILO IN THE MASTER BOOT RECORD AT THIS TIME!
Reboot, and you should be connected to your home.net. Copy the common hosts file onto the gateway machine, as well as the other files specified last month.
Slackware - Install the A, AP, D, and N series. Choose the menu selection method of installation. Do NOT install APACHE, INN, or BIND. When prompted, go ahead and set your local network information. Leave your nameserver and gateway prompts BLANK. Finally, choose the proper vmlinuz kernel for your system.
When asked if you want to make a bootdisk, answer yes. Make several simple vmlinuz bootdisks. Do not install LILO at this time.
Reboot, and don't worry when it freaks out about not being able to find the network. Jump down to the setting up the NIC section and follow the instructions there, and reboot again.
Pre-installation planning:
Make sure you have the aforementioned ISP info handy.
If possible, try to get someone else involved in the project.
It is much easier to diagnose, test, and troubleshoot with someone else at the workstation and you at the gateway.
Make sure the ipfwadm software is installed on the gateway machine. This is not a problem in Slackware, but depending on what you choose when you install, it may not get installed in RedHat. If necessary, install it using glint or by hand:
rpm -ivh <nameofipfwadm.rpm>
Setting up the PPP interface:
RedHat - In text mode, you can either use the linuxconf utility, or
configure it manually. Under X, use the Control Panel/Networking/Network
Configurator utility.
Slackware - Here you have to do it manually. The down side is it's a bit more difficult, but the up side is in case of a problem, you will have a lot better idea of where to look to fix it.
Regardless of which flavor of Linux you are using, the following things will need to be done on either machine:
* Add your ISP's Primary and Secondary DNS servers' IP addresses to your /etc/resolv.conf file. This is identical for both distributions. (A sample resolv.conf is shown just after this list.)
* Add and configure the ppp0 interface, activate it at boot time, make
it your default gateway device, and have it set your defaultroute. Finally,
you will need to configure the ppp0 interface to automatically redial on
link failure.
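Referring back to the first item in the list above, a minimal /etc/resolv.conf might look like this; the domain name and the nameserver addresses are placeholders to be replaced with the values your ISP gave you:
# /etc/resolv.conf
search home.net
# your ISP's Primary and Secondary DNS servers (placeholder addresses)
nameserver 10.0.0.1
nameserver 10.0.0.2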
RedHat - Open Network Configurator, click on the Interfaces tab, select Add, then follow the prompts of the Network Configurator to set the above options.
Additionally, select the Routing tab, and check the Network Packet Forwarding option. To finish up, make sure the Default Gateway: is empty, and the Default Gateway Device: is ppp0. Select Save, then Quit.
Slackware - You have two options here - you may use the pppsetup utility that comes with Slackware 3.6, or you can script it yourself as described in the troubleshooting section.
I can only recommend the "script it yourself" method, as my experience with the pppsetup method met with mixed results. When used as an end-user program (after login and initiated by hand), it worked well. When used at boot time, called from the rc.local file, sometimes it would connect, sometimes not.
To use the recommended scripting method, proceed to the troubleshooting section, create and test the scripts, then edit your rc.local file to call the unicom script BEFORE the ipfwadm stuff.
If you do use the pppsetup method, be sure to read the docs and insert the line ppp-go in your /etc/rc.d/rc.local file BEFORE the ipfwadm stuff.
Concerning auto redial - there is a great little program for this, called pppupd, available at:
ftp://metalab.unc.edu/pub/Linux/system/network/serial/ppp/pppupd-0.23.tar.gz
Unpack it: gunzip -dc pppupd-0.23.tar.gz | tar xvf -
Look at the README file for complete compilation instructions, but in a nutshell, copy, then edit the pppupd.cf.template file to match your system.
You will have to provide the path to the pppsetup scripts, or the script described in the troubleshooting section, the time interval between pings, as well as a hostname for the program to ping.
Next, simply open the Makefile and look for the line:
CONFIGFILE=
And set it to the path of the pppupd.cf file you created earlier.
Finally, enter the command "make" at the command line and you will end up with the pppupd binary. Copy it to your /sbin or /usr/sbin directory.
You can start this at boot time if you desire by adding the line:
pppupd > /dev/null
to your rc.local file, but I would be cautious, as during testing this intermittently caused some freaky things to happen. I recommend starting it by hand at first; then, if all goes well, put it in your rc.local file at some point after the ipfwadm stuff.
* Enable IP Forwarding in the kernel at boot time. This should already be activated on the Slackware box. To make sure, issue the following command: cat /proc/sys/net/ipv4/ip_forwarding. This should return the number one (1). On the RedHat box, edit /etc/sysconfig/network and change the line FORWARD_IPV4=no to FORWARD_IPV4=yes.
* Edit your /etc/rc.d/rc.local file to instruct the machine to masquerade
for the rest of the network. Again this is the same for either distribution.
There are probably many better ways to do this, but here's what works for
me:
* Open /etc/rc.d/rc.local, and uncomment or add the following lines
(as necessary,) in the following order:
1. ipfwadm -F -p deny #deny everyone not listed below
2. ipfwadm -F -a masquerade -W ppp0 -S 192.168.1.0/24 -D 0.0.0.0/0
The previous line, number two (2), activates masquerading for the home network (192.168.1.0/24) and specifies ppp0 as the interface through which the masqueraded traffic leaves for the Internet.
Setting up the NIC:
RedHat - This should have been done during the installation of the software. If not, in text mode, you can edit /etc/conf.modules or use the linuxconf utility. If you have X up and running, you can use the Control Panel/Networking/Network Configurator you used before for the PPP interface.
Slackware - Provided you have a supported NIC (you have been listening to me, haven't you?), edit /etc/rc.d/rc.modules and uncomment the appropriate line for your NIC by removing the pound (#) sign at the beginning of the line. You may or may not have to pass some configuration information here, such as the IO port and/or IRQ of the NIC.
In either case, be sure to leave the NAMESERVER and DEFAULT GATEWAY dialogs blank.
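For the Slackware case above, the uncommented line in /etc/rc.d/rc.modules ends up looking something like this; the module name and the io/irq values are examples for a hypothetical NE2000 clone and will differ for your card:
/sbin/modprobe ne io=0x300 irq=10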
Monolithic vs. modular approach to gateway services:
You have two options for providing gateway services on a UNIX box -
a monolithic kernel (one with all the drivers and required support compiled
as part of the kernel itself,) or a modular approach (in this method you
use your standard kernel, and load or unload the required drivers and services
as needed.)
There have been about as many wars over this issue as the emacs vs. vi debate, so here's my two cents' worth - I use the modular approach, mostly because it makes for a smaller, leaner kernel, and most importantly, I'm lazy ;-)
Since the new kernels already have support for ip_masquerade, ip_forward, and ipfwadm built in, why go to all the extra trouble of compiling a new kernel? Sure, some of us get off on tweaking and tuning our installations continuously, but the purpose of this series is to get you up and running with a minimum of fuss.
Recompiling the kernel for gateway services:
This is not necessary if you are using Slackware 3.6, RedHat 5.1 or
above.
If you are a masochist, kernel compilation instructions can be found in the Kernel HOWTO, and the required parameters for gateway services are specified in the IP_Masq MINI HOW-TO.
Testing the gateway machine:
The RedHat box should fire right up upon reboot. The Slackware box should fire up on reboot as well. In either case, proceed to the following tests.
If you fail to connect, or any of the following tests fail, go to the troubleshooting section for some ideas on how to resolve the problem.
Testing the interfaces - Simply issue the command ifconfig, and it should return three (3) interfaces: lo, or the loopback adapter, eth0, your NIC, and ppp0, the connection to your ISP.
Testing the PPP interface - kill the pppd daemon a few times. Unplug the phone line from the modem. Make sure it redials properly.
Testing physical connectivity - ping the outside world by IP address (Use one of your ISP's DNS numbers you obtained,) then ping one of your local machines.
Testing name resolution - ping the outside world by hostname, for instance - ping ftp.foobar.com, then ping something local - ping filserver01.
Testing routing and gateway functions - issue the command netstat -r to examine your routing table. There should be four entries:
1. <your ISP-assigned IP>, with no Gateway, a Genmask of 255.255.255.255, Flags set to UH, MSS of 1500, Window of 0, irtt of 0, and an Iface (Interface) of ppp0.
2. 192.x (or localnet), no Gateway, Genmask 255.255.255.0, Flags U, MSS, Window, and irtt identical to the above, Iface of eth0.
3. 127.x (or localhost), no Gateway, Genmask 255.0.0.0, Flags U, MSS 3584, Window and irtt the same, Iface of lo.
4. default, Gateway <the same IP as number one (1) - your ISP's machine at the other end of the PPP link>, Genmask 0.0.0.0, Flags UG, MSS 1500, Window and irtt the same, Iface ppp0.
Configuration of the client machines:
UNIX Clients -
RedHat - Using either linuxconf or the Network Configurator, set the
default gateway of your client machine to 192.168.1.1.
Slackware - Using netconfig, set your default gateway as above.
NT Clients - Open Control Panel/Network/Protocols/Properties/IP Address, and set your default gateway as above.
NOTE: For services other than http, smtp/pop3, icmp, and telnet, see the notes and tips section below.
Testing the client machines:
If everything has gone well, you should be able to fire up your browser
and be off and running with access to your mail server, access to the web,
and telnet access to the net.
If any of the above services does not work, see the troubleshooting section below.
If you need other services, such as ftp, real audio/video, cuseeme, and so on, consult the notes and tips section below.
Troubleshooting the installation:
Gateway Machine -
Make sure all three interfaces are being recognized. If not, reconfigure
the one that is missing.
Check your scripts and routing tables. If necessary, review the gateway machine's PPP and NIC setup for accuracy.
Finally, if you are having no success with the RedHat or Slackware whizbang PPP connection thingie, you can do it with the tried and true scripting method using the following technique:
Again, there are probably many better ways to do this, but this is what I came up with. You will have to create two scripts: one to dial up your ISP, log in using chat, and configure your PPP daemon (pppd); and one to pass the chat program the proper information about your modem and tell it what username/password to send to the ISP's machine when prompted.
In my case, my ISP expects a Username and Password to be entered, using clear text. Then, the ISP's PPP daemon starts up automatically. The following examples are for this sort of configuration only. Depending on your ISP you may or may not have to modify them. See the References section of this column for information on other configurations.
In my case, I created two scripts: one named unicom (the script that dials the ISP and starts pppd), and one named unicom.chat, which contains the modem information and the expect/send pairs.
Using your favorite editor, create the scripts, save them, and then make them executable by issuing the following command - chmod +x <name of script>
Contents of the script unicom:
#!/bin/sh
pppd connect \
'chat -v -f /sbin/unicom.chat' -detach crtscts modem defaultroute \
/dev/modem 115200 &
Contents of the script unicom.chat:
TIMEOUT 5
"" ATZ
OK ATDT2213005
ABORT "NO CARRIER"
ABORT BUSY
ABORT "NO DIALTONE"
ABORT WAITING
TIMEOUT 45
CONNECT ""
TIMEOUT 5
"name:" your username
"word:" your password
When you are done, place unicom and unicom.chat in the /sbin directory. Run the unicom script from the command line. If all goes well with the following tests, then call the unicom script from rc.local, placing it ABOVE the ipfwadm lines you created earlier.
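Since chat is invoked with -v, its expect/send dialogue (along with pppd's messages) is logged through syslog; a quick way to watch the negotiation while you test, assuming your syslog writes to the usual place, is:
tail -f /var/log/messages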
UNIX Clients -
Double check that your workstation has the gateway machine's NIC set as its default gateway (192.168.1.1 in this example).
Ping 127.0.0.1, then the machine's IP address. If this fails, your networking setup is incorrect, or your NIC is malfunctioning.
If this goes well, ping the gateway box by IP address. If this fails, check your cabling.
Ping the outside world. If this fails, the problem lies in the gateway, not the client.
Now repeat the above steps, using hostnames instead of IP addresses. If it fails at any point, you have a name resolution problem. Check your DNS settings in resolv.conf, your /etc/host.conf file for the line "order hosts, bind", and your hosts file for accuracy.
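For reference, a typical /etc/host.conf for this kind of setup contains just the lookup order and, optionally, the multi keyword; this is a generic example rather than anything distribution-specific:
order hosts, bind
multi on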
NT Clients -
Double check that your workstation has the gateway machine's NIC set as its default gateway (192.168.1.1 in this example).
Ping 127.0.0.1, then the machine's IP address. If this fails, your networking setup is incorrect, or your NIC is malfunctioning.
If this goes well, ping the gateway box by IP address. If this fails, check your cabling.
Ping the outside world. If this fails, the problem lies in the gateway, not the client.
Now repeat the above steps, using hostnames instead of IP addresses. If it fails at any point, you have a name resolution problem.
Check your Control Panel/Network/Protocols/Properties DNS settings, making sure neither "enable lmhosts for lookup" nor "enable dns for lookup" is checked in your networking setup, and that your hostname and domain are correct; finally, check your hosts file for accuracy.
Some notes and tips on particular services:
As described here, the gateway should support ICMP requests, Web Surfing,
SMTP/POP3, and telnet.
For additional services, particularly ones that embed address or port information in their data streams (FTP and RealAudio are the usual examples), you may need to load some additional kernel modules at boot time.
For a complete listing of the supported applications, see the IP_Masq HOW-TO.
At a minimum, you will probably want to load the FTP module and the RealAudio module.
Edit the /etc/rc.d/rc.local file mentioned previously, and add these lines BEFORE the ipfwadm rules you put in there earlier.
/sbin/depmod -a
/sbin/modprobe ip_masq_ftp
/sbin/modprobe ip_masq_raudio
NOTE ON MODULES: There are many more modules available; these are simply the two I use most. To add additional modules, just add them using the above lines as a guide, as in the sketch below.
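For example, if your users also need IRC DCC transfers or Quake to pass through the masquerading gateway, the corresponding modules from the stock 2.0-series kernels can be loaded the same way (these module names assume a 2.0.x kernel built with IP masquerading support):
/sbin/modprobe ip_masq_irc
/sbin/modprobe ip_masq_quake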
Example rc.local scripts:
RedHat -
>snip of lots of stuff<
cp -f /etc/issue /etc/issue.net
echo >> /etc/issue
# Now, the stuff you add -
echo "Loading Masquerade Modules .."
/sbin/depmod -a
/sbin/modprobe ip_masq_ftp
/sbin/modprobe ip_masq_raudio
echo "Done..."
echo "Loading Masquerade and Routing Rules.."
ipfwadm -F -p deny
ipfwadm -F -a masquerade -W ppp0 -S 192.168.1.0/24 0.0.0.0/0
echo "Done.."
# if configured properly, no pppupd required
Slackware (with my script)
>snip gpm stuff<
# Now, the stuff you add -
/usr/sbin/unicom
echo "Loading Masquerade Modules .."
/sbin/depmod -a
/sbin/modprobe ip_masq_ftp
/sbin/modprobe ip_masq_raudio
echo "Done..."
echo "Loading Masquerade and Routing Rules.."
ipfwadm -F -p deny
ipfwadm -F -a masquerade -W ppp0 -S 192.168.1.0/24 0.0.0.0/0
echo "Done.."
pppupd > /dev/null
Slackware (with pppsetup script NOT RECOMMENDED)
>snip gpm stuff<
# Now, the stuff you add -
ppp-go -q
echo "Loading Masquerade Modules .."
/sbin/depmod -a
/sbin/modprobe ip_masq_ftp
/sbin/modprobe ip_masq_raudio
echo "Done..."
echo "Loading Masquerade and Routing Rules.."
ipfwadm -F -p deny
ipfwadm -F -a masquerade -W ppp0 -S 192.168.1.0/24 0.0.0.0/0
echo "Done.."
pppupd > /dev/null
References:
Previous Columns:
November, December, and January columns
Other:
IP_Masq mini HOW-TO
Ethernet HOW-TO
Net-3 HOW-TO
Network Administrator's Guide
Mastering Windows NT Server 4 (3rd Edition)
ISP Hookup HOW-TO
ISP Connectivity HOW-TO
Resources for further information:
Web Resources:
http://ipmasq.cjb.net/
http://www.redhat.com/
http://www.linuxgazette.com
http://www.linuxjournal.com/
http://www.cdrom.com/
Newsgroups:
alt.unix.wizards
comp.security.unix
comp.unix.admin
alt.os.linux.slackware
comp.os.linux.networking
comp.os.linux.hardware
linux.redhat.misc
Print Materials:
Linux - Installation, Configuration, and Use (Michael Kofler)
RedHat Linux Installation Guide - versions 4.2, 5.0, 5.1 (Red Hat Software, Inc.)
Linux Gazette - (SSC Inc.)
Linux Journal - (SSC Inc.)
Linux - The Complete Reference (Walnut Creek CDROM Books)
Next month, installing a caching web and nameserver, some tips and tricks for advanced configuration, and some secrets to improve performance!
Linux Installation Primer #1, September 1998
Linux Installation Primer #2, October 1998
Linux Installation Primer #3, November 1998
Linux Installation Primer #4, December 1998
Linux Installation Primer #5, January 1999
Linux holds a promise for the global software community similar to the one the 1849 Gold Rush held for California. For software products and projects, Linux is software gold. When electroplated onto cheap, high-performance PCs, Linux offers all the appearance, performance, and function of the high-priced, real-gold offerings of the major vendors, including Microsoft, Sun, IBM, and HP. And the analogy can be extended further. Just as the 1849 Gold Rush provided many expected and unexpected benefits to the State of California and even to the U.S. economy, Linux is poised to provide both expected and unexpected benefits to the software industry.
The 1849 Gold Rush promoted settlement of the State of California and provided discretionary cash at a critical juncture in US history: funding art, educational, building, and other projects which would otherwise have been delayed by decades or never accomplished at all.
Linux is already promoting the settlement of new software areas. Linux has provided the opportunity for many engineers to contribute to operating system kernel development, advanced networking, real-time scheduling, and super-computer design. The results have been an exciting array of high-quality, advanced software. Without the momentum offered by Linux, these areas would remain sparsely populated. Today there is a healthy collaborative community that has "settled" into and is growing and evolving in each of these areas.
Linux is capable of much more. In addition to enabling the settlement of new software areas, Linux provides a fertile seed as a kind of discretionary cash whose addition to the global software economy will fund advancements in the software technology and will broaden sources for commercial software. Linux "gold" can provide the extra resources needed to enable software projects that would otherwise be infeasible. This is especially important on a global basis. Today the world's software industry is dominated largely by U.S. companies. Commercial licensing and royalties feed the U. S. software hegemony and often stifle initiation of projects or products in developing countries.
Linux changes the rules.
Linux can provide an inexpensive yet strong foundation for large-scale projects which otherwise face a multiplier of licensing restrictions and fees. Even in the U. S., reduction of licensing fees multiplied over hundreds of equivalent machines was a major motivator for using Linux-based supercomputing to produce the special effects of the movie Titanic.
The combination of zero royalties and low hardware costs enables the prerequisite infrastructure of large projects to be built cost-effectively. Furthermore, maintenance and upgrade costs can be controlled by the project more efficiently. While software evolution is more rapid under Linux than under commercial operating systems, each project nonetheless can select the upgrades and maintenance which are appropriate to its own specific requirements without arbitrary vendor upgrades and artificial external costs. Support cannot be withdrawn because a complete snapshot of the source code used for the project is always available.
For example, many large-scale projects exist which have been developed in the public domain but which are tied to a proprietary infrastructure. In one such case, the U.S. Weather Service has built a large, public domain source system for weather forecasting based upon Hewlett Packard's (HP) proprietary Unix operating system and compilers. The costs of implementing a national-scale forecasting system on high-priced HP equipment would be prohibitive to all but the wealthiest countries. However, with some effort, the entire code base could be converted to Linux and built using standard open compilers such as g++. Several template facilities might need to be reworked around the template limitations of g++, and data byte-order assumptions embedded in some parts of the code would have to be resolved, but in theory such a conversion could be completed successfully. Then a top-rate automated weather tracking and early-warning system could be implemented wherever raw data could be obtained to feed the forecasting software. Although obtaining raw weather data is not trivial, literally hundreds of programmer-years' worth of work on a world-class front-end weather system already has been provided. Once available under Linux, modern weather forecasting services could begin to become available to developing nations worldwide.
Product development also benefits from the same factors. Any number of commercial products can be built without the traditional dependencies on external licensing and support. The control of Linux-based software products can be fully vested in the project itself. Projects can be jump-started with fewer legal and financial dependencies. New products can be built by virtually any source in the global development community and can compete on technical merit with few licensing constraints and no royalty encumbrances. Some examples might be a Linux version of the popular modem multiplexers such as Webramp, or Linux-based PDAs, office Intranet and file servers, etc. Linux is highly suited for building any software or firmware product that is service oriented and capable of being managed remotely, especially via the Web.
Products can be built:
The world's software industry has great intellectual talent. This wealth of talent is certainly, in aggregate, greater than any single company commands, including Microsoft, IBM, Sun, or HP. But many software developers outside of the U. S. have been hampered by the steep cost of project startup and by the licensing restrictions that give ownership and control to others. Limited local opportunities further promote the "brain-drain" from developing countries to the U.S.
Linux helps solve these problems because ownership (copyright) of software is shared and typically does not require complex or onerous licensing arrangements. Equally importantly, project costs can be controlled so that they better reflect the actual costs without arbitrary expenses due to inflated infrastructure requirements or foreign license and royalty fees.
Linux is continually being adapted and revitalized and represents an ever more capable foundation to empower the world's software development community. Linux is truly a renewable Gold Rush.
After using Linux exclusively for a couple of years I began to feel a bit out of touch with the computer users in our rural community, nearly all of whom use some version of Windows. It had become more difficult to help people (especially over the phone) with computer problems, as my memories of Windows configuration and tweaking had faded. With a fresh set of YARD boot-disks at hand, I reinstalled Win95 from the CDROM, rebooted from the YARD disks, then reinstalled lilo on the usurped master boot record.
After completing this unpleasant and tedious task I felt that I deserved some sort of reward. Due to an inherent and insatiable curiosity about software and operating systems, I had ordered the BeOS release 4 CD a week or two before but hadn't gotten around to repartitioning a hard disk and installing it. In this article I'll discuss my first impressions of this BeOS installation, as well as compare the relative features, appearance, and usability of Be and Linux.
The BeOS is a young operating system. Its hardware support and software availability remind me of Linux in the mid-nineties. Since not everyone is willing to spend money on new hardware in order to run Be, the company is willing to refund the purchase price (no questions asked) if someone buys a copy but can't get it to run. My hardware happened to be supported, which was as much a matter of luck as anything else.
Be can only be installed on a primary partition, in contrast to Linux, which can be installed on any sort of partition. Earlier releases of Be were limited to IDE drives; release 4 can be installed on SCSI drives as well, but only when connected to certain brands of controller card. It happens that I have one of the supported SCSI cards, but the drivers are new so I thought I'd play it safe and make room for a new primary partition on my IDE drive. Be Inc. has licensed a limited version of the partition-resizing utility Partition Magic and included it on the Be CD. This version of Partition Magic is meant to be run from Windows, so it wouldn't be of much use to a Linux user without Windows installed. It's also limited to three preset partition sizes. I tried it, but it refused to recognize the partition I had created.
The other method of installation is to boot from the supplied boot-floppy and insert the CD during the booting process. The new partition was still unrecognizable. To make a long story short, after several attempts I found that only one of the first two primary partitions would work for the installation. Unfortunately the first two primaries on my IDE disk were occupied by Win95 and Linux, so I ended up moving the contents of some Linux partitions on the SCSI disk, editing the /etc/fstab file to reflect the changes, and creating a new first primary partition on that drive. BeOS installed without a hitch once it could find an acceptable boot partition, but it struck me as being rather picky about its partitions.
Be comes with a bootmanager (based on lilo) but I chose to add a new stanza
to my existing lilo.conf file, as lilo has always been dependable for
me and I couldn't see an advantage to using Be's. The stanza is simple:
other=/dev/sda1
  label=be
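For context, a complete lilo.conf along these lines might look roughly like the following. The boot device, kernel image, and root partition shown are placeholders for whatever your existing Linux entry already uses, and /sbin/lilo must be re-run after any change for it to take effect:
boot=/dev/hda
prompt
timeout=50
image=/vmlinuz
  label=linux
  root=/dev/hda2
  read-only
other=/dev/sda1
  label=be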
Accustomed as I am to the verbose Linux boot-messages scrolling by, Be's seemed spare and uninformative. The messages which do appear let you know that a "boot-volume" has been found, and that unspecified devices are being initialized. Be requires about the same amount of time to boot as Linux if you add in the time X takes to start up.
The developers at Be have created a GUI which is reminiscent of both the Mac and OS/2 interfaces. Not spectacular or flashy, but nevertheless cleanly designed and functional. Linux users have become accustomed to configurability, a trait which aficionados cherish but which can be confusing to new users. A few minor tweaks of the interface are possible with Be, such as scrollbar style and desktop background, but the basic window appearance is hard-wired. In a sense, the legions of Linux programmers have, over the years, transformed a liability (X-Windows' lack of a built-in window-manager) into an opportunity. That this was even possible is due to X's inherent flexibility along with the availability of the X source.
Poking around in the directory tree I found some familiar names, directories such as /etc and /dev, plus a directory /beos/bin which contains the standard unix utilities such as ls, cp, and the bash shell. These are Be ports of the GNU utilities; I suspect the source is tucked away on the CD in order to satisfy the GPL. An old, non-GUI version of the Vim editor is even included. These utilities can be run from a Terminal window, which is similar to xterm with the addition of a menubar from which font-size and colors can be set.
One feature I was interested in trying out is the support for more than one color-depth simultaneously (on separate pager desktops). This works, but not consistently in my case. Even though my video-card is supposedly one of the highly-recommended cards for Be, switching back and forth from an 8-bit to a 16-bit desktop will eventually result in a corrupted display. I quickly learned not to try a high resolution and then set it as default for all desktops, as rebooting is the only solution if all screens are garbled. A reboot into "safe mode" to reset the defaults is necessary when this happens.
Font rendering is one part of the OS which the Be developers got right. The screen fonts are crisply rendered at all sizes. Among the many demos included with Be is a very impressive font demo which displays fonts in a variety of ways: skewed, rotated, smoothly changing size, etc. One possible reason for the high-quality on-screen font display is that the only type of font currently supported in this release is TrueType, though Type 1 support is planned. TrueType fonts typically just look better than Type 1 on a computer screen, as can be seen in Linux when using one of the TrueType font servers such as xfstt. A basic editor is included with Be, called Styled Edit. It's similar to Microsoft's Wordpad in that it can use scalable fonts along with their bold and italic versions and saves that information in the file.
Be uses a new filesystem called befs. It sounds impressive; a 64-bit journaling filesystem which can store file metadata in a file's attributes. This is similar to OS/2's HPFS file-system, except HPFS files just have attribute pointers stored in the file; the actual attribute data is stored in a binary configuration file. When I first read about Be's filesystem I hoped that they had avoided using binary-database configuration files, as in my experience they cause more problems than they solve. The filesystem also has inherent database capabilities, whatever that might mean. I'd like to see a demonstration of this feature.
The Be filesystem also has support for very large files, up to one terabyte. All of this sounds impressive, but without applications which make use of these features (I mean large applications which handle large amounts of data, such as video-editors) it's difficult for an end-user to see any particular benefit.
There has been much discussion in the past few months on the linux-kernel mailing-list about the feasibility of extending the trusty ext2 filesystem to include some of these features. People doing video-editing in particular would like large-file support; Linus Torvalds thinks that these people would be better off using a 64-bit machine for this sort of work, as the ability to make use of large files "comes with the territory". Journaling ability for ext2 is being worked on, and after a lengthy debate about file meta-data the consensus seemed to be that similar results can be achieved using the existing ext2 filesystem.
Be isn't a multi-user OS as Linux is, but the PPP networking is easy to set up. Unfortunately, my modem wouldn't respond. It turns out that external modems are autodetected well but internal modems can be troublesome. I eventually found a configuration window which allows the user to add a non-PNP ISA device, but it took me at least as long to figure out the format of the memory addresses needed as it ever has taken me to figure out a cryptic Linux config file. I finally found a newsgroup posting which explained it well, as well as several which claimed to but were wrong. Once over this hurdle, opening a PPP session was easy, as long as the ISP uses PAP authentication. Otherwise you're out of luck. Once online, the stability of the connection and transfer rates seemed comparable to what I'm accustomed to with Linux.
A few network cards are supported, nowhere near the number which Linux supports. My card isn't supported, so I was unable to test an ethernet connection.
Printer support is very limited; only Apple LaserWriters, HP PCL3 LaserJets, and Epson ink-jets are usable. I use an old Epson dot-matrix printer; even if my printer worked with Be I would miss being able to use Ghostscript.
There really aren't many, though much is promised. That's the chicken-and-egg problem with any new operating system: nobody wants to port applications until there is a sizable user-base, but people don't migrate to the new OS without those "killer apps" available. There are several e-mail clients, at least two word-processors-in-progress, but so far not too many of the audio and video applications which Be needs if it wants to live up to its nickname "the media OS".
One problem with application availability is the change in release 4 to the ELF file format, similar to Linux's. This means that programs written for earlier Be releases won't run on release 4. Evidently recompiling can be tricky, so there is a large backlog of ports and programs which haven't yet been updated for the new release. This was disappointing, as I was looking forward to trying the Be port of GNU Emacs. Another change made in release 4 is the adoption of the Cygnus egcs compiler as the default. Previous releases used a crippled free edition of the commercial Metrowerks development tools. The software developed for and ported to previous Be releases was developed with the Metrowerks tools; evidently some code rewriting is necessary to compile the old code with egcs. Yes, Be ships with a compiler, header files, make, etc., as does Linux, but the trend in the Be world is binary software distribution rather than the freely available source Linux users are accustomed to.
One of the most impressive applications available is Gobe Productive, a word-processor with spreadsheet and image-editing modules. The documents it produces are layer-based, similar to the usage of image layers in the Gimp. Speaking of the Gimp, the image editor includes a subset of Gimp plug-ins, though without preview windows. Unfortunately this application saves documents in Yet Another Proprietary Format, though RTF is also supported. Until Gobe Productive supports the ubiquitous Word file format (promised in a future release) it's unlikely to sell too well unless Be really takes off.
The apps situation reminds me of Linux a couple of years ago, minus the open-source tradition which kept Linux alive and thriving before the advent of commercial Linux interest (and is still responsible for much of the vitality and yeasty ferment of the Linux community). I admit I find the shareware-crippleware tendency in BeOS software to be a little irritating, but Be is unabashedly a commercial OS with all that implies. Commercial, but not above using driver and utility code developed by free software developers.
Right now the BeOS is not much more than potential. If Be Inc. can induce hardware manufacturers to write more drivers (and if enough users migrate) it may do well. This year is a perfect time for alternatives to Microsoft to gain user-share due to Microsoft's legal entanglements and growing public disenchantment. I don't think many current users of Linux will abandon it for Be, though I imagine there will be a significant number who will dual-boot if audio and especially video-editing applications for Be become available. I doubt the user-interface amenities Be provides are enough of an incentive to attract many current Linux users, as people who want these features are currently using KDE, with a stable release of Gnome on the horizon providing another choice. I believe in the principles fueling the free software movement, but not in an exclusionary sense. If Be should gain popularity and market-share the consequences will likely benefit Linux as well. Consumers will begin to realize that viable non-Microsoft choices exist; Be's unix-like structure could expose more people to the stability and other benefits of unix-like operating systems.
An Ode to Richard Stallman
(Or Minutes to the NYSIA/WWWAC Software Summit)
I recently attended the New York Software Summit held at the Fashion Institute of Technology (FIT) in NYC. This was a joint conference sponsored by the New York Software Industry Association ( www.nysia.org) and the World Wide Web Artists Consortium ( www.wwwac.org). I, being a subscriber to the LXNY mailing list ( www.lxny.org), was informed of this event by Jay Sulzberger, who was moderating a panel titled "The Free Software Movement, Open Source, and the Coming Free Market in OSes". I found the subject of this panel to be rather close to my heart, but since it meant a 70 mile commute into NYC for me, I thought I would pass it up. Then I read the rest of Jay's announcing e-mail and saw two words which eventually changed my mind: Richard Stallman. He was going to be on the panel, and as it turned out, this was too much of an incentive for me to pass up. What follows is probably too much text to describe the event, but then, I'm drawn to the subject and I can't help myself. So please forgive my indulgence.
Day -1) It was a busy day for me. Rather a busy week for that matter. I have just started working on this new experiment called PHENIX, which is supposed to take data in 6 months. The experiment is 5 million dollars short, and with the engineering run coming up in 6 months, things are rather hectic. I remembered Jay's email about the software summit and pulled it up. I was still in my debating phase as to whether I should go or not (even though I knew Richard Stallman was going to be there) and with my current workload, I was starting to lean towards not going. I read through Jay's latest announcement and realized that the closing date for registering for the conference was today, at noon. It was 11:50am!!!! Oh God, I had to make a decision NOW. This was hard. The arguments were flying around my head. "The timing system must be worked on." "Richard Stallman." "It's a critical component of this detector and rather late." "Richard Stallman." "I worked all day yesterday and this morning on the system with two engineers at my side and we made a lot of progress." "Richard Stallman." "I would get a lot done tomorrow by keeping up the momentum on this project." "Richard Stallman." "The run is only 6 months away." "Richard Stallman." "Well, the run is 180 days away, one day off is less than a 1% effect." "Richard Stallman." "Screw the timing system, I'm going...." So I grabbed the phone and called one of the two numbers. It was busy. I called the other number and got a recording asking me to leave a message. (It is now 11:58am.) I left a message saying that I wanted to register. I then pulled up their registration web page. It was still active. I quickly filled it out, hit the submit button, and some reassuring text appeared saying that I had been registered. I know information technology better than that and decided to call again. (It's now 12:02pm.) I was able to get through and told the lady that I had just registered on the web and I wanted to get some kind of confirmation that my registration went through. She told me this could not be done for reasons which were too involved to go into now. Oh well, I did my best. I continued working on the timing system that afternoon.
By early evening I went back to my office and I got a phone call from one Bruce Bernstein, who asks me if I'm his cousin. Bruce is the main organizer of this summit and his cousin is Stephen Adler, a particle physicist who works at the Institute for Advanced Study in Princeton NJ. It turns out that there are two Stephen Adlers in High Energy and Nuclear Physics. This guy from Princeton and me. And Bruce is this other Stephen Adler's cousin. There are some cosmological forces going on here which confirm that I really should go to this summit. It was good that he called because I explained to him my rush to register for this conference at noon today. He says, "You registered on the web, right?" "Correct," I reply. He says, "What? You don't trust the web?" I didn't want to reply to that. I did get what I wanted, verbal confirmation of sorts, from the summit organizer no less, that I was registered. I was ready to go.
Day 0) Up at 5:20 am. I wanted to catch the 6:25am LIRR into Penn Station. My commuting routine is working better. (See my article on Fall Internet World 98 for details.) I got to the train station with my new notebook in hand, with time to buy a bagel, coffee and catch a seat on the 6:25am express to Penn. My intent was to jot down some thoughts, as I was riding into the city, on my new notebook. But there was a problem. You can't type on your notebook, drink coffee and eat your bagel at the same time. I'll get this commuting thing right some day. The typing had to wait. I ate my bagel and drank my coffee, then fired up my notebook to jot down some notes. This was more of an experiment to see how well one can use a notebook on crowded trains. (The guy to my left decided to sleep in such a position as to pin my left elbow, making it rather challenging to type. I managed.)
I found the registration center which was in the lobby of building A. I went to look for my badge, and it was not there. They told me to go to the problem desk. The line at the problem desk was just as long as the line to get your badge. The lady at the problem desk looked at me and said "Sorry, I can't find your name anywhere. You must register with a personal check." "I have no check and I registered on your web site; check again," I demanded. Another shuffle through some hand-written pages of "last minute" registrants and no Stephen Adler. Just then Bruce shows up. "Stephen Adler?", he looks at me. "I saw Stephen Adler on a list somewhere" he conjectures. "Just write him a badge" he orders the problem desk lady. And so it goes, the free software Gods implanted an image of my name on a list somewhere in Bruce's brain last night, and thus I get my hand-written badge, reading "Stephen Adler, B .and. L". This is my ticket in, and I don't care if it should read, "Stephen Adler, BNL". That's BNL for Brookhaven National Laboratory. It has a rather Fortran look and I figure it must be a joke by the same free software Gods who got me to attend this meeting. (Physicists tend to write too much Fortran code anyway.)
The summit was organized around the following format: two parallel breakfast sessions, one for the NYSIA and one for the WWWAC; two morning parallel tracks; a lunch with keynote address; one afternoon parallel track; and a plenary with a keynote panel at the end. Stallman was going to be on the 11:15-12:30 panel on free software and the keynote plenary panel at 3:30-4:45.
I took off up to the 6th floor to attend the NYSIA breakfast panel. The first of two keynote speakers was Steve Malanga. His topic was trying to analyze the city of New York and why it didn't have more of a software industry. The talk was rather boring and bureaucratic. Lots of charts showing job growth over time, how NYC was able to gain back the number of jobs it lost during the last recession, etc. He was trying to point out that there is a big software industry in NYC but under a different name. Wall Street. (i.e., recent Wall Street hires account for a large technology sector.) Around me were about 100 people, and I had one of two notebooks there. An indication of the backward technology culture of NYC. The 6th floor, where they were having this panel, was the dining area of a cafeteria. There were long tables with white tablecloths and plastic chairs in the room. The architecture of the place gave it a bit of a 1970's look and feel. When I got there, the panel had started and I was proudly pulling out my notebook. The problem now was the tablecloth. I had set down my coffee cup on the table, and my baglet to its side. (As in a little 2 inch bagel. Why not, applets, servlets, baglets, what's the difference.) The chairs were one against another so as I tried to get into my chair, the domino effect caused the two chairs to my right to push up against someone else's chair. I then sat down and as I pulled my notebook out of my bag, this shifted the tablecloth around and almost spilled my coffee on my notebook, ugg.... Food and notebooks in tight places don't mix. Eat your food and then deal with your notebook. Or get a firm table, firm chair, no tablecloth, and keep your coffee as far away from your notebook as you can reach. I have this recurring nightmare of spilling coffee all over my notebook. It's going to happen, it's just a matter of time. In any case, let me get back to the talk. It was boring, so I left for the Java breakfast. The Java breakfast was better. The speaker, David Gee, works for IBM and is passionate about Java. He said so in his talk. One interesting note from his talk was that he claimed that NT systems were up 97% of the time. I'm not sure if this was a good or a bad thing, but the number was clearly pasted on one of his .ppt pages. Then there were things which bothered me about his talk. He was overselling Java. He kept talking about how he wanted to have all information accessible to him at all times, wherever he was in the world. And he kept using the airline industry as his best example. He wanted to know those important things like: What is the model of the plane he was going to fly on? What is the seating layout on the plane, so that he wouldn't get a seat where the window wasn't just so? What was the latest stock quote for e-bay? And he wanted to get all this information from his notebook plugged into the RJ45 outlet in his hotel bathroom. This type of over-trivialization of information technology tends to kill the application you're trying to sell. This guy then pops up a .ppt page with a picture of a shrink-wrapped Java development package on the screen. He says "I am not plugging or selling this product...." and then rattles off a full list of the features of this software package. With that bit of hypocrisy, I packed up my notebook and headed out.
The first track of parallel sessions was going to begin soon, and I chose to attend the digital music one. One of my colleagues had told me about mp3.com a couple of days earlier and I realized that the music industry was going to be turned on its head within a year. The session turned out to be in the same room where the NYSIA breakfast panel had been held. So back up the elevator I went to get an earful of digital music talk.
There were 4 panelists: Nick DiGiacomo, a consultant; Michael Robertson of mp3.com; Howard M. Singer of a2b music; and Dick Wingate of Liquid Audio. The discussion was good. I was planning on just attending this panel for a short while and then going off to other panels and talks, but the discussion was so good and of relevance to our life on the Internet that I stuck it out. The deal with digital music is the following. The bandwidth and compression algorithms have converged so as to allow the free availability of CD-quality music over the Internet. This is very much to the tune of open-source software about 10 years ago, but now the general public is getting into the act. The problem: a large, powerful, wealthy establishment is fighting very hard to control its market and preserve the status quo. Three of the panelists (the guy from a2b music, the guy from Liquid Audio, and the consultant) are clearly trying to work with the industry. They talked on and on about how to restrict content. On the other hand, Mike Robertson from mp3.com made a very brave statement. He said that talking about security was like talking about morality. You cannot talk against it. But he continues to say that it is impossible to try to restrict the distribution of music. He then says that freedom over content will rule the market. Talk about security is nonsense and driven by the oligarchy protecting their business model, which is music distribution via CD. The audience applauds. (The only applause during this session.) What I got from this session is clear. Battle lines are forming over the distribution of digital music on the Internet. On one side you have you, me, and the artist; on the other side you have the rich and powerful establishment. The establishment is working hard to introduce "security" into the distribution of music content. "Security" only deals with how one can restrict access to the content. It has nothing to do with encrypting the music itself. (I'm not sure how you would restrict access without encrypting the music itself.) This was emphasized by the consultant. This will be done by adding restriction signatures to the music. For example, a two-day license for a song would work such that you download the music, your hardware gizmo or software applet plays it for two days and then plays it no more. The control of who can listen to the music, and for how long, rests with the artist, or so says the industry consultant. Reading his lips, I hear, the music is controlled by those who sell it, those being the establishment. And it's clear that the establishment is starting to wake up to the fact that distribution of music over the Internet could very well destroy their whole business model, and them with it. MP3.com is on the road to changing this. It has a 50-50 deal with the artist for whatever is sold over their web site. And the artists keep ownership of their work. Right now, when a band cuts a record, the music is then owned by the recording company and belongs to the band no more. The band then gets about a 20% cut of the sales. Also, a band must sell more than 250,000 CDs in order not to get dumped. These are very large obstacles for bands to overcome in order to get their music heard by the general public. And guess what, the new music I hear over the radio and on MTV all sounds the same. To me, this is a clear fallout of the restricted access musicians have to the general public, set up by the music industry. But the Internet and web sites like mp3.com will change all that.
Another point made by Mike Robertson of mp3.com: the record industry is not going broke with the current method of music distribution via CD. It is making lots of money. So to them, it is important to maintain this status quo. Clearly, the Internet has the power to change all that. Other side issues which were discussed were audio formats. a2b and Liquid Audio were all hot about their standards, those being closed ones. The guy from mp3.com commented that open standards win on the Internet, and I'm sure time will bear this out. There was more to the discussion which I cannot remember and failed to write down in my notes, but it was a good prelude to the next session I was going to attend, the free software panel.
The free software panel was being held in building C and I was in building A. So down to the lobby I go in search of building C, somewhere on the campus of this Fashion Institute of Technology. In the lobby, I find Jay Sulzberger at the problem desk. It looks like web registration technology failed him as well. Jay is the moderator for the free software panel, and he also invited me to be a panelist on another panel held last fall for one of the LXNY meetings. The subject of that panel was something like free software in your business. It was my first chance to talk about my work to a non-physicist audience and I jumped at the chance, even though the subject was not physics. I figured I used enough free software in my work that I would be able to fit that topic in somehow, amongst my aerial photo transparencies of high energy physics laboratories across the nation and the world. So, as implied in what I just said, I have already met Jay. I waited for him as he finished up with his problem at the problem desk (web-based registration technologies, hmmm...). This gave me a chance to walk with him over to building C in search of the classroom where this free software panel was to take place. On the way we chatted about something; I can't remember if it was quantum computers, free software or his admitting to being a gun nut, as is someone else who is an acquaintance of ours.
We found building C, we found the 3rd floor and room C324, the room where Richard Stallman was to grace us with his presence. Richard Stallman was not there when Jay and I showed up. The rest of the panel and about 20 people who made up the audience were there. The classroom was wide and set up in such a way that the desks were close to where the speakers stood to address the class. The desks were these long tables with a black, hard-surfaced table top, no tablecloths. These tables were certified notebook-friendly. The chairs were high and rather comfortable. They kept you at attention as you sat in them. I got a chair two rows back from where the speakers were to address the audience, centered in the room. I wanted to be in the center of this room in order to absorb all that was to transpire. I set up my notebook, popped open the Netscape browser editor window, and Jay came over to continue his talk about quantum computers. I think this was just an excuse to come over and check out what kind of software I was running on my notebook, since I noticed his subtle glance towards my notebook screen as he leaned over to tell me about NMR probes, coffee cups, statistical mechanics and how engineers can make work what physicists dream up. (Which is true, sometimes...)
Things start to settle down in the classroom. I notice that most of the people who made up the audience for this panel discussion are guys like you and me. We don't wear formal clothes. We have a solidity and ruggedness in our manner. Jay definitely is heavy on the ruggedness side. We have thoughts to be shared and passion in our hearts about the work we pursue in our daily lives. But to counterbalance this atmosphere of technology pioneers, there were about 3 or 4 guys who sat together towards my right in the back corner of the classroom. These guys stood out. They were formally dressed, each one. They have a fragility to their manner. It's different with these guys. They obviously have thoughts to be shared; I can't really account for the passion in the heart, but they do have something the rest of us don't. Money in the wallet. Lots of money in the wallet. These guys are "the establishment" and will play a very interesting role in the events to unfold.
So there I sit, waiting for the panel discussion to start, Jay is outside trying to give away free software to anyone who walks by the classroom door, and we are all waiting for Richard Stallman to show up, so that we can start this damn thing. Jay has now scared off half a dozen people who were unfortunate enough to have walked by the door, and has given up waiting for Richard. Jay begins. He tells us a story about how the free software movement started with Richard. Back some time ago at the MIT software labs, Richard was trying to print to some ding dong printer and couldn't. There was a software bug which stood between him and his printout. Richard wanted to solve the problem by getting the source code and fixing it. He couldn't; the source code was not available and, more importantly, could not be made available because the company that sold MIT the printer would not hand over the code. The code was locked up behind legal doors and Stallman was not going to be able to solve this problem. Thus the beginning of the free software movement which has evolved into what we know today. With that story told, he introduced the panelists who were present: Jesse Erlbaum, a man who writes or uses object-oriented Perl extensions; Elliotte Rusty Harold, who is an XML expert; Jim Russell from IBM, who is "a herder of serious cats"; and Dave Shields, also from IBM, who would talk a bit about Jikes. Jesse (the Perl guy) and the XML guy went first in introducing themselves. The first one talked about how he couldn't do his work without source-code-available software. The second guy talked about how XML will be a replacement for a lot of file formats including RTF. One of the big problems with word processing is that for all practical purposes, file formats are not convertible, thus forcing you to buy the software in order to read the file. An MS business model no doubt. XML will fix all that. Then went the two guys from IBM. The first one talks about Jikes, how IBM was able to release the source code to the Internet (but under a restricted license agreement which I'll go into later), and the /. effect. Once Jikes was released, there was a post to slashdot about it and the Jikes upload site experienced that /. effect. The Jikes project went from #5 on the IBM upload list to #2 in two weeks. He showed a nice plot of the integrated number of downloads of Jikes for different platforms. It looks like the Windows version was released first. 15 days later, the Linux one was released and about 5 days after that, it overtook the Windows binary upload count. IBM now has hard concrete data to show that Linux does count! The second IBM guy, Jim Russell, talked about how it was not so difficult to convince higher management at IBM that it made good business sense to release the source code to something like Jikes, thus earning Jay's title of "herder of serious cats".
At some point during these introduction talks, Richard Stallman walks into the room. I get to see the man for the first time in flesh and blood. He stands about 5 foot 5 inches, has long black hair and a beard. He carries a cloth bag in which, as I later learned, he keeps a notebook, amongst other personal objects. He would melt right into any university setting (or high energy physics laboratory, for that matter). He starts to clown around with Jay. He starts making horn signs above his head from behind, as Jay continues to read his introductory remarks for the next panelist. This goes on for a bit and the audience is getting a real kick out of it. Finally, Jay turns to see Richard, freaks, and this kidding around ends. Jay continues with his introduction and Richard starts to make himself at home in the classroom. Off go his shoes, out comes his notebook, and he finds a quiet place under one of the tables where he fires up his notebook and begins hacking at some code or other. Jay continues with the introductions, the panelists continue with their opening remarks, and Richard is oblivious to all this. He gets up from under the table, paces back and forth around the entrance to the classroom (in his socks), getting ready to address his audience. It's like he is doing mental laps, warming up for the upcoming discussion on free software. (Don't forget, we have the establishment sitting in the back right corner of the room. It's going to be Richard vs the establishment.) Jay finally gets around to re-introducing Stallman. Stallman starts by saying that he is the president of the Free Software Foundation. He continues by saying that he is not speaking about the "open source" movement, and he does not care about making computers easier to use. At this point, I sort of lose the specifics of what he said (since my notes are rather jumbled), and I will try and paraphrase what he said. Basically, his concern is on a global social historical scale. The free software effort is about freedom, not software which costs nothing. A freedom which goes beyond source code and into the way we interact as a community. Free software is a manifestation of this freedom and is an example of it. I think it's best to see this in the opposite sense. When you are encumbered with software which you cannot change, even if you have the source code in front of you but are not allowed legally to change and distribute the changes, then your personal, inherent freedom has been taken from you. That same freedom the US constitution gives you, which is the right to life, liberty and the pursuit of happiness. Another important point which Stallman makes during this discussion is that people confuse Linux with GNU. Linux is only the kernel, and works in conjunction with all the software on your PC. I would describe Linux as being the conductor of a symphony. The musicians are all the apps we run, and GNU is the concert hall itself, without which one cannot have a concert. (This is my metaphor, not Stallman's, but I think Stallman was trying to get this point across.) He does not like web sites which are set up for the public good but which run ad banners. (I think he is talking about sites like /., linux.org, etc.) And he pointed out that he runs Debian GNU/Linux on his notebook. (Which fits right in with his persona.)
Stallman's introductory remarks never really end. The more he talks about the freedom of software development, very much on the same plane as freedom of expression, the more the intensity of the room discussion heats up. The best word to describe the rising intensity of the discussion is passion. And there was lots of it. The passion level took a step function when the "establishment" chimed in. The elder of this group asked the question: what if MS opened up the Windows 98 source code under the GPL? At this point Jay, who was out in the hallway offering free software to some innocent person passing by, hears this, jumps back into the classroom and exclaims, "What? Open Source Windows!", and just about collapses on the floor. The question needed to be answered; the room goes silent and Jay takes the floor to answer the question. The broader question being whether MS would continue to make money if Bill Gates GPL'ed the source code to Windows 98. Jay's answer is no. There is a free market economy which you must deal with, and in such an environment Microsoft would perish if it GPL'ed its OS source. He continues by emphasizing that justice would be served and the company would die a rightful death. (Jay also holds this sentiment for Apple.) Stallman forces his way into the discussion: no, MS would be redeemed if it GPL'ed its source code. Jay has a fit. Jay exclaims that MS and Apple should both die. MS would have to live through a million cockroach lives before it could be considered for a redeemed life! But Stallman is adamant. MS would be redeemed if it fully GPL'ed its source. But Stallman is firm: MS cannot take half steps and do something like IBM did with Jikes and just release the source under a restricted license. It's full GPL or it's worthless. In the meantime, the guys in the establishment corner are trying to force the issue that one cannot make money on software if you release the source code. The back and forth on this subject goes on; issues such as opening up file formats to help free up the software industry rise and are batted around. Jay finally ends the discussion since we have run out of time.
As the session ended, people broke up into smaller discussion groups. I packed up my notebook and headed over to the group which surrounded Richard. There was one woman who had his attention at the time. (I think there were three women in the room.) She was a reporter of sorts, from England, trying to get some private time with Richard for an interview. He was all booked up and really wouldn't give her the time of day. I don't know why; she was all in a tizzy to get time with Stallman, and she was full of spunk too. (I think she would have given Stallman a better writeup than I'm doing now...) Somehow the discussion started on Linux vs GNU and the confusion thereof. This gave me a chance to butt in and I asked Richard about his kernel. "Yes, I have a kernel project called the GNU/Hurd". I knew about this project already, but I just wanted to get a word in. "So what happened to it?" I asked. He starts to tell me about some of the key architectural features of his kernel and clearly it was a big, complicated implementation of a distributed kernel. I guess any type of distributed kernel would be complicated and thus it seems to have not made much progress. He made a comment that he did find one guy who has actually tried to run it. One of the "establishment" guys was there listening in on this discussion. The conversation then turned to patents. I made a comment that patents are there to protect the "investors" and not really the inventor. Richard agreed with me. The guy from the "establishment" tried to argue that patents are there to protect the inventor and to help market the inventions so that the general public can benefit from them. He continued, "if you could write software which would cure cancer, then a patent on it would get the cure out to the masses." (I'm paraphrasing here...) My comment was that in principle, this is what you would argue, but in practice, the inventor gets a very small piece of it. It's the large corporations, and those who run them, who end up owning patents and who get the profits from such patented inventions. I continued by telling Richard that I, working for the Department of Energy, signed a work contract which had a clause in it that said that all my ideas would belong to the government. The federal government now owns all the intellectual property which comes out of my brain. And if there are any patent rights due to me, the lab makes no effort to tell me what they are; I have no idea if I have any such rights. This must be the case with a lot of research firms across the world; Lucent, IBM, etc. The discussion continued further in terms of how we can try to protect ourselves from the "establishment" abusing the patent system. Finally I stuck out my hand and introduced myself to Richard and told him I wanted to thank him for all the good he has done for the software community. He shook my hand and then turned to this "establishment" guy who was leaving and said that he was going to work as hard as he had to, to defeat him. He said this in a raised, angry and attacking voice. I was taken aback by the strength of his conviction. It was genuine though. I then wandered off to another small group, and talked to Jim Russell. I introduced myself and asked the question, "Why do we get so passionate about software?" The idea being that those who write software and publish it on the Internet should do so and that's it. What's all the fuss about? We talked a bit more about distributing source code.
I stuck around a bit after that, but finally decided that I had better get back over to building A and get lunch. Lunch was included in the registration fee and I was not about to miss out.
I got to the cafeteria where lunch was being served. Not bad: they had real plates and silverware, unlike the BNL cafeteria which now serves everything on paper plates or plastic containers, with plastic utensils. As I got there, everyone had already eaten and the keynote speaker was starting his address. He was NYC Comptroller Alan Hevesi, talking about the woes of the software industry in NYC. The city is in 9th place across the country when you measure the software industry on a per-capita scale. Some of the comments which stuck in my mind are the following. (I didn't take notes on my notebook since I wasn't about to open it next to my chicken lunch. There was the remainder of a large coffee spill on the tablecloth next to me. That could have been on the keyboard of my notebook. Ahhhh....) NYC had to pay out $900,000,000 to the New York Stock Exchange in tax exemptions to keep it from moving to NJ. The speaker blamed that on those attending the summit, since the attendees had made it so easy for anyone to set up an information system anywhere to do their business. The EZpass system is a wonderful piece of technology which allows traffic to flow past the toll booths surrounding Manhattan. But this means that the toll collectors are out of a job. The speaker was quite sensitive to the dangers of high-tech information systems. In a few years, there will be no more phone operators. There will be one recording serving all businesses, and those who worked at those jobs answering phones will be looking for other work. Another comment he made was that a new tax break was being put on the books. Anyone in NYC who uses hardware to write software does not have to pay taxes when they purchase that hardware. This statement caused a great round of applause. Another comment the speaker made, which I want to share, is this. (It is taken out of context but it stands on its own.) When the phone system was being installed in Russia, Stalin gave orders not to install phones in every home in Moscow. Stalin was afraid that he would lose control over the exchange of information amongst the citizens, if they had access to phones, and thus his control over the citizenry and his hold on power. To me, this was a very insightful comment about the power of information technology and ties right in with another article I wrote a couple of months ago.
And so the talk went. I had my fill of a tasty chicken dish, listened to this guy go on about the lack of a recognized software industry in NYC, and had a very nice view of some 1920's-looking architecture outside the window I was facing. One last note on lunch. To my right, I overheard some guy mention slashdot. As I looked over, I saw this young guy, who was wearing a Netscape pin on his blue sports jacket. He was talking to an older guy (60's or so, an "establishment"-looking guy) and told him that he checked out slashdot about 4 times a day. This older guy, who had his back to me, was writing something down on a business card. The URL of /. is my guess. So there you have it, the young teaching the old how to survive in this Internet world...
After lunch was the 3rd parallel track. I went to the talk on CORBA. I did so since I had just signed up to the ORBit mailing list and I'm in the process of learning how to develop distributed objects using IONA's implementation of the CORBA standard. The talk was given by an IBM'er, Jason Woodward. He was excited about CORBA technology and how IBM was using it in conjunction with Java. The talk was laced with comments plugging IBM's e-business solutions, but if you ignored that, you got a rather general overview of distributed object computing. He talked about the battle lines being drawn between MS's version of this technology, named COM, and CORBA/Java. The talk was given at such an abstract level that it never answered my perennial question, where's the ORB in CORBA? (Being that I'm new to this distributed object thing, knowing which software component does the ORBing is important to me. It all seems to be hidden in "the implementation".) In any case, I asked a question at the end (a rather loaded one), which was, "Is COM a strict open standard and how will the open source movement, implementing the CORBA standard, play out in the future of CORBA?" He answered by saying COM is not an open standard, and open source will do good things to CORBA. Just what I wanted the audience to hear, especially since during his talk he gave the well-worn example of Betamax vs VHS, Betamax being the proprietary standard and VHS the open one. Thus the answers to my questions were seen in a more compelling light. CORBA would win, MS would lose.
The day was winding down, the 3rd set of parallel sessions was over and now it was time for the grand finale: the keynote panel on the future of the Internet/software industry in the next 5 years. Richard was going to grace this panel. Needless to say, the panel discussion turned into an impassioned debate over free software. What do you expect with Richard Stallman on the panel? The panel took place in some big auditorium in building C. There was room for about 500 people and I would say there were about 200 people there. I got there about 10 minutes before it began. I spotted Richard Stallman pacing around, getting ready to take us on. Later, I saw him sitting alone behind the panelists' table typing away on his notebook. Taking advantage of some quiet time to hack on his Hurd kernel, maybe? It was the calm before the storm.
Bruce Berstien took the mike and called on everyone to sit down so that the panel could begin. He then introduced himself and continued with an award presentation to Sheldon Silver, the Speaker of the New York State Assembly. Speaker Silver had the flu, so Robin Schimminger, Chairman of the Assembly Committee on Economic Development, took the award for him. The plaque was to thank Sheldon Silver for making it possible to get this new hardware tax break onto the books. Bruce was very proud of his award. It was a nice big shiny plaque. Robin, who took the award, made some remarks which I can't remember and left. Bruce then introduced two moderators who would lead the discussion, Tom Watson and Jason Chervokas, co-founders of @NY. The first one introduced the panel: Stallman; Jim Russell, the same IBMer who was also on the Free Software panel; John Borthwick, someone associated with AOL and the development of ICQ; and finally Gerry Cohen, CEO of IBI, an "establishment" guy. (I'll explain later.) The second guy from @NY starts the discussion by asking Richard a question. Richard ignores the question and makes a comment criticizing the award given to Speaker Silver for the tax break. "Tax breaks are bad," and he goes down some tangent about how local and state governments screw the poor in order to offer corporate welfare to the rich "establishment". I guess you had to be there to feel the embarrassment of the situation. Stallman had no qualms about ripping apart this shining moment which Bruce had polished up by giving away this plaque with great fanfare. I have to hand it to Richard. To him, there is no difference between the phrases "freedom in software" and "freedom of speech". At some point during this panel discussion, he comes right out and says that he is a social activist, pursuing any avenue to advance social justice and freedom. The gloves are off. The moderator takes control of the discussion by asking questions of the other panelists. The guy from IBM made a small speech in which he thanked Richard Stallman for the work he has done in fostering the GNU movement and all the good software which has come from it. My hat goes off to IBM! He then continued to say that what IBM cares about is delivering technology to its customers in a form that the customers want. If this includes source code solutions, then that's what they will deliver. He mentioned that IBM had joined the Apache effort, was providing AFS support for Linux (although I don't think AFS is open source), was distributing Jikes under a pseudo-source-code strategy, etc. When it comes to the plumbing of information technology systems, IBM does not care how it gets built, fixed or distributed. Their goal is to provide systems, service and solutions to those who ask for it. The guy from AOL/ICQ, during his opening remarks, talked about this ICQ product which I've never heard of before. It's some kind of Internet communication tool, a GUI version of the Unix talk application maybe? It relies on a server and freely distributed clients. The amazing thing about this product is how widely it is used. At one point they released a new version of their client and they got 1e6 downloads of the client in 3 weeks. 6e6 people are currently using it. The guy talked about how they watch their xferlog files and see the correlated accesses to their upload site. A whole city will suddenly start to download the software, then a whole country will follow. To me, this is a glimpse of the future (current?)
software distribution for all companies doing business over the net. The last guy to speak, Gerry of IBI, the "establishment" guy, was a real piece of work. He controlled a very large company in NYC. The unfortunate thing is that he really was not up to speed on what is going on right now, software-wise, over the Internet. He made one classic mistake: he talked about what he didn't know about. First off, he did make a good point that besides new software efforts, there is the whole backlog of old software systems which need to be kept in place. Somewhere in the city of New York there is a system which is in charge of cutting all the checks for NYC workers. It's old, and has to be maintained. This is obviously a big job. But this was about the only useful comment he made to the discussion. While the discussion raged about free software and tax breaks, he made a comment that Linux has only been around for 6 months. Richard and the audience jumped all over him for that. He then asked the rhetorical question as to which of the two web servers, Apache or Netscape, was better. (He asked this question with a tone which implied that Netscape was the better server.) The audience quickly jumped in and told him that Apache was faster and more reliable. He then made the statement that customers want value from their software. "When was the last time you heard a customer walk into a software store asking for freedom?" This was clearly a dig at Richard's statement that free software stands for freedom, not $0-cost software. Finally he made the comment, "All this software is so GNU! GNU, new, get it?..." Richard got pissed and attacked him, rightly so. Then there was this question from the audience: "Who do you sue?" Richard fires back, "Do you sue someone if the plumbing breaks in your building? No, you get it fixed." The guy who asked the question replied that he would fix the plumbing and then sue someone for damages. To me, there is something wrong with this type of "free market economy". The final comment of Richard Stallman's which I want to record is that he was appalled at states going around trying to undercut each other by offering tax breaks to large corporations to induce them to leave one state and settle in another. A comment from an "establishment" guy in the audience was, "What's wrong with that? It's a free market." Richard exclaims, "A free market in tax breaks? Oh GOD!" He then says that states should form a union, go to the federal government and get it to pass some laws forbidding this activity. He concludes this chain of thought by saying, "The name of this union is called the United States of America." To me, that makes Stallman a true patriot.
The discussion went over time by about 20 minutes. And it was passionate. Poor Bruce had to get up in the middle of it to defend his award given to the Assembly Speaker, declaring that the tax break was not new, but a "straightening out of the rules", since all manufacturing equipment bought in NYC pays no tax. Those well-worn issues of how one makes money with open source technology were batted back and forth, and Richard always won the argument. Gerry, IBI's CEO, said at one point that SAP, the second largest software company in the world, does not give away its software for free, and it never will. SAP customers pay lots of money to buy their software and don't want it to be free. Richard responds by saying that he is going to write a GPL'ed version of the software SAP sells. It will take time, but there will be a freely distributable, source code version available sometime in the future. How can you argue with that? As for the ICQ developer, Richard was going to write an ICQ server equivalent and GPL it. This made John Borthwick sit back in his chair and exhale. The fact is, Richard stands on the moral high ground with his GNU General Public License. And no one, mind you, no one, can stand higher than him on this issue. He has taken the freedom of source code distribution via the GPL and has turned it into a powerful vehicle to advance social justice. And the power behind Richard's morality is nothing other than the unhindered flow of ideas over the Internet. Richard knows this; he mentioned something about working together to make sure the commercialization of the Internet does not hinder this freedom of information exchange. This also ties in with the comment made at lunch about how Stalin, who was the mid-20th-century Russian one-man establishment, was afraid of losing control over his citizens through the installation of phones in Moscow.
The discussion finally ended. I went up on stage to see if I could get in on some of the post-panel discussion groups. I noticed Richard was being sought after by another female journalist, this time working for Wired. He was in the process of giving his card to her and it seemed like this time he was going to grant an interview. I had a hard time trying to get into any of the conversations and figured that it was time to go home, which is what I did. The rain awaited me as I left building C of the Fashion Institute of Technology. I quickly walked up 7th Avenue to catch the express back out to Ronkonkoma, my Long Island destination. On my way home, I stood in a crowded train car, the windows fogging up due to the human density, as the train rocked back and forth on its way east. This quiet time gave me a chance to go over the day's events. One thing is for certain: the trip was well worth it. I thanked the free software gods for tearing me away from the PHENIX timing system for one day. The final panel discussion ended with the same question put to each of the panelists: "Where do you see the Internet in 5 years?" To me, this is the unanswerable question. No one knows. At the beginning of this century, when new models of the atom were being developed by Rutherford, Bohr and others, no one knew that their work would lead to something as powerful and destructive as the nuclear weapon. In the case of forecasting "the Internet", looking back will not tell you where we are going or will end up. The only thing we can do is stay informed of what is going on now and work with the new ideas which are presented to us by our peers. Those who do this will be the "Internet pioneers". And what strikes me most, from the discussions during the day, is that time and time again the "establishment" was not adapting to new ideas, IBM being the one exception. The recording industry is one example. Gerry, the CEO of IBI, who mocked Stallman with his new/GNU joke, and the suits in the audience who wanted to know who they were going to sue, are all in for a big fall. On the other hand, those who understand what it means to have the freedom of modifying the source have the future in their hands, and the Internet will be theirs for the taking.
This e-mail is from Richard Stallman himself. He wants to clarify some points I wrote in my article. Click here for further details.
Original article can be found at http://ssadler.phy.bnl.gov/~adler/Stallman/Stallman.html
Although I am an avid Linux user, certain projects I undertake back me into the corner of another operating system.
The project involved some BASIC source code which compiled fine under Microsoft QuickBASIC 4.00b. However, moments after the software was run, the runtime library choked out a useless message about running out of memory while cleaning up string space. In the interpreter, it ran fine indefinitely.
From what I could tell, the source code wasn't doing anything sneaky, and a search on the web revealed this as a true error.
The solution, or so I thought, was to upgrade to QuickBASIC 4.5 and recompile.
Now, I don't know if you've been to the stores recently, but QuickBASIC isn't on the shelves. VisualBASIC, its successor, is. And one thing we know is that QuickBASIC code doesn't run under VisualBASIC.
Considering there was a lot of source riding on this, I called Microsoft about their Ultra Super Glow-In-The-Dark Intergalactic MSDN subscription. One thing I can say is that they are prompt in answering the phones.
In case you don't have MSDN, you basically get a set of 50 CDs with every Microsoft development package known to mankind on it along with three CDs containing all the bug reports Microsoft wants you to know about.
However, you've paid for a lot more, and I knew that. So I asked for every possible CD I was entitled to, international or not, to be sent my way.
You can imagine the surprise I got when a crate arrived a few days later with five binders chock-full of useless CDs. There was software for everything, in every language, including sign language. Now I also know the quick way to get extra binders when they wear out.
As I pawed through my new booty, I located DISC 3, Development Tools International, part number X03-56050: a bright blue disk that read "Microsoft Quick Basic, Visual Basic 1.0 for DOS, Microsoft Basic PDS 7.1, Visual Basic 2.0 for Windows Professional Edition, Visual Basic 4.0 Enterprise Edition." January, 1998.
I slip that baby into my CD-ROM drive and up comes a file called "Qb_45". Oh, yes! Uh, "Qb_45.jpn". Oh, no.
Yes, QuickBASIC 4.5, the version I needed was in my hands, only in Japanese. I attempted to look for an online tutorial for Japanese, but apparently that's not development material.
Ah, but the Japanese must know how to program in English! So, with a false sense of renewed glee, I installed... only to see my screen fill with gibberish that could not be mistaken for anything other than characters for a Japanese codepage.
No problem, I thought, I must have missed the English version. I'll just go through each development CD. But alas, no QuickBASIC was to be found.
So, I was back on the phone with customer support.
Now it took a little while to explain to the person that I was missing some vital CDs. In fact the entire CD inventory was read to me over the phone and I dutifully hand-checked to make sure I had each CD.
"Yup, you have them all sir," came the sweet voice at the other end.
Hmm, this time I decided to take a different approach. "Whew! I'm glad. Before you go, could you tell me which disk Windows 3.1 is on?"
I was very much aware that THAT was not in my inventory, but should be.
"Hold on sir." After a short round of MS-Music, "that's on our Platform Archive set."
"Which is included with the Intergalactic MSDN subscription?"
"Yes."
"Which wasn't sent to me?"
"Correct."
Apparently my definition of "every possible CD I'm entitled to" differs from theirs. "Could I have that sent to me?"
"Sure!"
"Before I let you go, could you tell me if DOS, Windows 3.1, and QuickBASIC 4.5 are on it?"
The person was kind enough to check with a supervisor. "Yes it is sir, it's a 20 disc pack."
"Interesting, your web site says there are only 19 discs, and QuickBASIC isn't mentioned."
I was corrected immediately. "That's because we updated it sir."
A week later the Platform Archive was at my door.
By this time you have obviously guessed that QuickBASIC wasn't included in the Platform Archive. It had 20 discs all right. One was labeled MSDN Library Archive, and the others, labeled 1 through 19, matched the web site.
So, I got back on the phone and called customer support again.
I explained that I had a problem with Microsoft's philosophy: they will happily give me the software in Japanese, but not in English. Did this make sense?
Perhaps Microsoft is taking the foreign market on the same upgrade ride that we took a decade ago and is just getting around to distributing QuickBASIC. Enough speculation.
The customer support person thought Japanese-only sounded fishy, and after consulting two levels of supervisors admitted that this really appeared to be an oversight. However, as I was reminded, "the purpose of MSDN is to supply the developer with recent development packages."
And I reminded customer support "the purpose of the Platform Archive is to archive things that aren't recent."
I won that one, but the point was moot. Since they didn't have the software, they couldn't help me. So I asked to speak to a supervisor.
The supervisor was less than sympathetic and suggested I call the supplemental parts department. I wasn't aware there was such a thing. Apparently if the dog runs off with your master diskettes, you can replace them.
The supervisor suggested I relay what had happened and Microsoft would send me a free copy.
Good enough. Only the supervisor wasn't going to call over on my behalf. And he wasn't going to transfer me either.
So I dialed supplemental parts and now the fun begins.
So I start to explain how MSDN gave me the Japanese version, not the English version, and am interrupted.
"Sir, if it's an MSDN part, I can transfer you; we don't deal with MSDN."
"No." Frustrated, I explain it is not an MSDN part because they "Could you look for QuickBASIC?" While she's checking, I proceed to explain how MSDN neglected to put it on the CDs and how supervisor said to call this number and get a free copy. I'll spare you the details of needing to explain things twice about how it doesn't make sense to ship a product in one language, but not the original.
I hear paper shuffling, and then she comes back on line explaining she can't find it. "Sir, I only deal with orders for replacement parts that aren't MSDN."
Time to change tactics. "Oh, sorry, I didn't know that..." After a brief pause, "I'd like to order QuickBASIC 4.5 please."
"I'm sorry sir, I can't sell you software. I can only replace parts."
Time to change tactics again. Apparently this maneuver can be done repeatedly without drawing attention to oneself. "Hi, I'd like to know what I need to do to replace my copy of QuickBASIC 4.5..."
Instantly she finds it. "I was looking under the MSDN list. QuickBASIC 4.5?" There was a slight pause, just enough to cause worry in the way only customer support personnel can. "I can't give it to you...."
"Why not?" I ask, conveying the disappointment in my voice.
"Because it's on 5.25" floppies, and customers don't want those. This is a decade old product sir."
"Look, I don't care if they're written on stone tablets. I want it."
"That will be $15 sir, now all I need is a part number."
Apparently this is some precursor to serial numbers, validating that I really owned the product.
I tell her I don't have the part number. I explain that so far over $2,000 has been spent on MSDN because I thought the $15 software package from seven years ago would be on it.
Now she explains that, even though she has the software, she can't give me the software without a part number. Now she wants to know how I had a copy of it.
So I explain, very slowly what might have happened:
"I've been hired by a software company that produced a legacy application they want to make sure is year 2000 compliant." These were two buzzwords she recognized as important, thank goodness for company memos.
"The problem is their office burned down and they got the source code from the safe deposit box. Now I'm ready to fix it, but have no compiler."
Suddenly she's more sympathetic, but still wants a part number. "Do you have it off the program?"
No, I explain that I have the source code. Not the compiler.
"Do you have the original disks?"
No, if I had those, I wouldn't be calling.
"Do you have the manual?"
I can't read the number through the ashes.
Finally realizing I'm not going to hang up, she orders me a copy. Only I'm not getting it for free. It costs me $15.
And $5 shipping.
And 92 cents in taxes.
"It will arrive in 3 to 5 business days... oops."
Now what?
"We may not have it in stock..."
"What?"
"It turns out this is a VERY old program and not only is Microsoft not supporting it, they aren't producing it any more." She goes on hold and checks.
Hmm, I'm thinking... just the reason to put it on a platform archive.
Minutes later: "Sir, we have 12 copies left."
"In all of Microsoft?"
"...in all of Microsoft."
She sent the order out, bless her heart. And it arrived days later.
QuickBASIC does not like to install on WinNT. But I managed to coax it by uncompressing each file by hand.
Incidentally, it didn't fix the problem. Total time: three weeks.
Microsoft is no longer supporting the language, so the problem will most likely forever persist. ...on Windows.
As for not first thinking of Linux for this job, I don't know what to say. Within seconds of checking AltaVista, www.basmark.com appeared with QuickBASIC for Linux at less than 1/10th the cost dropped on MSDN. Other solutions were available as well, including QuickBASIC to C translators.
Regarding Linux support, I'd like to thank everyone who's involved with open source, everyone who maintains archive FTP sites, and everyone who answers questions on newsgroups. Because of you, I and others don't face this level of frustration when working with Linux.
[ Footnote: You may find this amusing. When firing up an old 386 DOS machine to run Linux, I found *my* copy of QuickBASIC 4.5. Sure enough, in my archives were floppies... I forgot I had 'em. D'oh! ]
Kernel modules provide support for a lot of functionality within Linux. Unfortunately, I wasn't able to find a simple explanation of what they are and how to use them when I needed it.
Last summer, I installed Red Hat Linux 5.1 on my ThinkPad 770 portable computer. I used to be a Unix System Administrator and had installed Red Hat 4.1 on a computer a few years ago, so I expected to be able to solve any problems I encountered during the installation. The initial installation went smoothly.
I was taken aback to discover that I couldn't set up dialup network services. The configuration files were all there, but the system didn't seem to have kernel support for networking. I couldn't find any explanation in the documentation I had, but after digging around in Deja News (http://www.dejanews.com/), I learnt that kernel modules provided network support.
As operating systems evolve and grow over time, the designers of the system face a dilemma. If support for all possible functionality is included within the operating system kernel, the core program that controls the system, the kernel becomes very large and unwieldy. If support for the functionality is not included in the kernel, the functions will either work too slowly or won't work at all. Operating system designers typically solve this dilemma by modularizing support for functionality that can then be included or left out.
Traditionally, there are two ways to provide this modularity. The designer can separate functionality into separate processes, or the kernel can be recompiled to add functions the vendor left out or remove ones it included. If the functionality is separated into distinct processes, the kernel is called a micro-kernel. This solution imposes communication overhead as the processes coordinate their work. A kernel that has all of its functionality included when it is built is called a monolithic kernel. As the name implies, the downside of this solution is the size of the kernel. Linux's solution was to include kernel modules which can be loaded and unloaded on demand. This minimizes both kernel size and communication overhead.
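To make this concrete, here is a minimal sketch of what a loadable module looks like from the programmer's side. This is only my own illustration, not code from any distribution: the module name and messages are invented, and it assumes a configured 2.x kernel source tree (and gcc with the usual -D__KERNEL__ and include-path flags) to compile against.

#define MODULE
#include <linux/module.h>
#include <linux/kernel.h>

/* Called by insmod (or kmod) when the module is loaded into the kernel. */
int init_module(void)
{
    printk(KERN_INFO "hello: module loaded\n");
    return 0;   /* a non-zero return would abort the load */
}

/* Called by rmmod when the module is removed from the kernel. */
void cleanup_module(void)
{
    printk(KERN_INFO "hello: module unloaded\n");
}

Once compiled into hello.o, the module can be loaded with insmod hello.o, listed with lsmod, and removed with rmmod hello; the dialup networking support discussed above is provided by modules loaded in exactly this way.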
A Linux Journal Review: This is an updated version of an article which appeared in the December 1998 issue of Linux Journal.
Well, the long wait is over and 2.2.0 has finally appeared for the masses. For the sake of history, Linux 2.2 was officially released on 1/25/99. As of this writing, the mainstream press has not caught on to the release, so it is hoped that this will not get out too late to be useful. At this time, no distributions have announced dates as to when they will begin shipping 2.2.x kernels, but it is reasonable to expect that there will be mainstream 2.2.x options by March.
Submitted for your approval, my final i386 change summary. (I've now had three separate "final" versions, but I really mean it this time.) This document is intended as an expanded laundry list of new features and additions to the 2.2.x kernel, a major milestone in the history of Linux. Please note that this document does not cover all the new hardware that Linux supports. Many devices, such as scanners and printers, are handled exclusively in user space. Other devices, such as video cards and mice, are handled by a combination of user and kernel drivers. If you don't see a device class that you are interested in listed in this document, it is quite likely that Linux 2.2 supports it -- just not necessarily using the kernel to do so.
Also, I do not claim that everything in this document is PC. I believe that I am being fair, and I have pulled some punches with respect to how I phrased certain portions. If you think that I should reword a certain portion so as not to offend someone, let me know, but I will not make any promises.
1) Chips Galore
The world of Intel chips is a fast and interesting thing to follow, if you have nothing better to do. Merced, Celeron, MMX... the names of Intel technologies float past to be replaced by new cutting-edge technology. (Whether or not these technologies are worthwhile is a matter that I'm not even going to begin to try and debate.) In addition, AMD, Cyrix, and other companies have become solid competitors in the market and each have their own little optimizations, quirks, and bugs. It's a mess, to say the least.
Linux 2.2 will be the first stable Linux to support options for the various non-Intel processors in the kernel configuration tool. Perhaps even more importantly, Linux 2.2 (and later revisions of 2.0 for obvious reasons) supports bugfixes and workarounds for widespread processor bugs including the infamous F00F Pentium bug. Other bugs that can't be worked around, such as an AMD K6 sig11 bug, are reported during startup.
Merced hasn't arrived yet and probably isn't immediately forthcoming, but Linux 2.2 has already been ported to Sparc64, Alpha, and other 64-bit platforms so the infrastructure for a 64-bit native kernel is already happily in place. (There are, of course, other obstacles that would have to be overcome before Linux/Merced could be released but having a 64-bit ready kernel is an important step.)
Multi-processor machines will now operate much more efficiently than they did in Linux 2.0, with issues such as the global spinlock removed. Up to 16 processors are supported (the same as with 2.0) but the performance difference should be amazing. Also, there is now greater support for the IO-APIC on Intel boards, which will make SMP generally better supported. And finally, it is now possible to specify a multi-processor configuration without ever leaving the kernel configuration tool.
In terms of other ports, Linux 2.2 will feature improved support for a large number of 'workstation' machines such as Sparc, Sparc64, and Alpha machines. As for 'desktop' machines, Linux 2.2 has been ported to Motorola's m68k and PPC processors and can now be expected to run on many of these platforms, including the Macintosh (with varying degrees of hardware support, of course; support for m68k Macs in particular is not ready for prime time). Linux is also moving to processors, such as the ARM, that are increasingly popular for embedded systems.
On somewhat of a tangent, there is continuing work to support a subset of the Linux kernel on 8086, 8088, 80186, and 80286 machines. This project will never integrate itself with Linux-proper but will provide an alternative Linux-subset operating system for these machines.
In terms of memory consumption, the average Linux 2.2 setup will require more memory than Linux 2.0. (Although a larger number of components can now be modularized or compiled-out to allow a system administrator more flexibility if memory is tight.) There is some debate as to what is the lower limit in terms of functionality with a text-only system but it should still be possible to have only 4 megs of RAM in many situations. (8 megs are still recommended.) On the bright side, Linux 2.2 includes a number of new optimizations that should actually improve the performance of machines with at least 16 megs of RAM. The more, the merrier.
2) System Busses and Assorted Ilk
Although somewhat less crucial and cutting edge, Linux 2.2 will support a larger proportion of the existing x86 computers with the addition of complete support for the Microchannel bus found on some PS/2s and older machines.
In addition to hundreds of minor patches to the bus system (including many new PCI device names), larger improvements have taken place. The PCI subsystem, in particular, has undergone several major changes. Firstly, the PCI device reporting interface has been changed and moved to allow for easier addition of new information fields. This particular change doesn't spell much of a difference for an end user but it makes the lives of developers much easier. Additionally, it is now possible to choose whether you want to scan your PCI bus using your compatible PCI BIOS or through direct access. This allows Linux 2.2 to work on a larger set of machines as several PCI BIOSes were incompatible with the standards and caused booting problems.
Sadly, there is still little kernel support for Plug-and-Play ISA devices. While that would be a great addition, there are some problems with the currently proposed systems that will need to be resolved sometime in 2.3 before inclusion. Fortunately enough, there happens to be a great user-level utility, isapnp, for setting up PnP devices that requires just a tad more work than we'd like but gets the job done in true Linux fashion.
Laptops and many workstations can also benefit from improved support for power management, including workarounds for a number of incompatible BIOS implementations. Also new in 2.2 is the ability to use some functions of an APM BIOS on multi-processor systems.
3) IDE, and SCSI, and USB... Oh my!
As far as Linux IDE is concerned, not much obvious has changed for Linux 2.2. The most obvious change is that it is now possible to load and unload the IDE subsystem as a module, just like SCSI. (This also has the added bonus of allowing one to use a PnP-based IDE controller.) For less bleeding-edge machines, the updated IDE driver now supports older MFM and RLL disks and controllers without having to load an older version of the driver. Linux 2.2 now also has the ability to detect and configure all PCI-based IDE cards automatically, including the activation of DMA bus-mastering to reduce CPU overhead and improve performance. And finally, more drivers have been developed for controllers that are buggy or simply different. It's amazing how even excellent things can continue to get better.
Elsewhere in the IDE world, parallel port IDE devices have become more common and are now supported by Linux 2.2, for the most part. It is a good assumption that many devices that are not supported currently will be added as 2.2 progresses.
The SCSI subsystem's main improvements have been the addition of many new drivers for many new cards and chipsets. Too many, in fact, to even begin to name here.
PCMCIA adapters (or PC-card slots, as they are called now) are not supported in the standard Linux 2.2 but are supported by an external module provider. Thus, while not in the kernel, PCMCIA support will be included in most distributions.
IRDA support has also been added to the kernel although many controllers are not yet supported. As this feature was added only in the closing days of Linux 2.1 development, it may not be as generally usable as other, more mature, portions of the kernel.
Alas, there is some bad news here. Despite ongoing efforts by several parties to finish USB support, no support was included in time for a Linux 2.2 release. Several prominent developers have looked at USB support and it is likely that there will be some support before we get too far into Linux 2.2.x. (Alternatively, USB support could be provided through an external source in the same way that PCMCIA support is now.)
4) Ports: Parallel and Serial
Nothing much new on this front; Linux has always had incredible support for these basic building blocks. The parallel port driver has been rewritten with cross-platform issues in mind, and thus what was once just a 'Parallel Port' is now a 'PC-Style Parallel Port'. Functionality-wise, the only obvious change is that you can now effortlessly share a single parallel port device with multiple device drivers. (Note however that the naming convention used to label parallel ports has changed, so you may find that your lp1 has become your lp0. Distributions should allow for this change automatically, however.)
Serial support is chugging along as well as it always has but with one notable difference. Previously, a serial device such as a modem involved two devices, one for call-in and one for call-out. (ttyS and cua respectively) As of Linux 2.2, the two are combined in one device (ttyS) and accessing the cua devices now prints a warning message to the kernel log. On the bright side, Linux 2.2 includes support for having more than 4 serial ports, it allows serial devices to share interrupts, and it includes a number of drivers for non-standard ports and multi-port cards. My only complaint with serial support is its lack of support for the standard methods for modules to pass device parameters at module-load time via the modules.conf file and kmod. (Instead, these parameters are set using the 'setserial' command. Somewhat yuck.)
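For the curious, setserial is essentially a front end for the TIOCGSERIAL and TIOCSSERIAL ioctls on the serial device. As a rough sketch of my own (the device name is just an example), reading back a port's current settings looks something like this:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/serial.h>

int main(void)
{
    struct serial_struct info;
    /* Open the port without making it our controlling terminal. */
    int fd = open("/dev/ttyS0", O_RDWR | O_NONBLOCK | O_NOCTTY);
    if (fd < 0) {
        perror("open /dev/ttyS0");
        return 1;
    }
    /* Ask the serial driver for its current configuration. */
    if (ioctl(fd, TIOCGSERIAL, &info) < 0) {
        perror("TIOCGSERIAL");
        return 1;
    }
    printf("ttyS0: I/O port 0x%x, IRQ %d\n", info.port, info.irq);
    close(fd);
    return 0;
}

setserial uses TIOCSSERIAL with the same structure to change these values, which is why it has to be run again after every boot or module load rather than picking up parameters from modules.conf.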
It should also be mentioned that Linux 2.2 will support newer UART chips than 2.0 which may translate into higher transfer rates using newer modems.
5) CD-ROMs, Floppies, and removable media
Thankfully, the hodge-podge of hundreds of CD-ROM standards has solidified behind the 'standard' of ATAPI CD-ROMs. This reprieve has given developers time to completely rewrite the CD-ROM driver system to be more standardized in terms of support. Small, quirky differences between the individual drivers have now all been fixed for better support.
Rewritable CD-ROMs aren't supported nearly as well as one would like, unfortunately. SCSI CD-ROMs are handled well (and most IDE drives can use the SCSI-emulation driver, which presents them as SCSI devices). With other rewritable CD-ROMs, your mileage may vary.
Floppies are working as well as ever. There are new developments in terms of large volume floppies and it remains to be seen whether or not all of these will be supported. Those devices that communicate using ATAPI (a large number of them, actually) are already supported to some degree.
IOMEGA's Zip drive, an increasingly popular storage solution, is fairly well supported under Linux 2.2. These beasts come in three versions: SCSI, ATAPI (IDE), and parallel. Under SCSI and ATAPI, the Zip drives are supported just as any other disk would be. The parallel version of these drives actually uses a sort of SCSI-over-parallel protocol that is also supported in Linux 2.2. (Other IOMEGA solutions such as DITTO drives may also be supported using the ftape drivers.)
DVD drives are already supported, to some degree, under Linux as they largely present themselves as ATAPI drives. (SCSI DVD drives may not, but they will probably work using the excellent SCSI CD-ROM driver.) Unfortunately, this does not necessarily mean that all will be rosy in the Linux/DVD world, as Linux does not currently support any of the DVD-centric filesystems that have been proposed, nor have any user-space tools been developed to display DVD movies and so on. Once the standards stabilize a bit, it is highly likely that the requisite parts will be added to the Linux kernel sometime during the 2.2.x cycle, following the initial release.
Other removable media may or may not be supported under Linux 2.2. If the device connects through the parallel port, it is possible that it is supported using one of the Parallel Port IDE device protocol modules that are included in the kernel.
6) Glorious Sounds!
At long last, the sound code has been partially rewritten to be completely modular from start to finish. Distributions will be able to more easily include generic sound support out-of-the-box for their users as well as making it easier for the rest of us to load and configure sound devices. (Especially pesky Plug-and-Play ones.) Lots of new sound devices are supported as well and it looks like this is one area where Linux will really improve in the next year.
One very notable defect here is the remaining lack of support for the PC internal speaker, if only for completeness. Then again, Windows 95/98 doesn't do it either so who am I to judge?
7) Video4Linux
Linux 2.2 now has amazing support for a growing number of TV and radio tuner cards and digital cameras. This is a truly bleeding edge addition to 2.1's roster so there may still be some outstanding issues but it is reasonable to assume that they will be fixed in time. In my humble opinion, this is just an amazing area for Linux to be in at all.
8) Back me up, Scotty!
Linux 2.2's backup and tape device subsystem has not changed much since the 2.0 release. More drivers for devices have been written, of course, and substantial improvements have been made for backup devices that work off of the floppy disk controller (including the IOMEGA DITTO).
Rewritable CD-ROMs have become a popular solution for backing up data and they are supported under Linux 2.2. There are still outstanding issues in this regard; see my note above on CD-ROMs for details.
9) Joysticks, Mouse, and Input Devices
Joysticks are better supported in 2.2, including a large number of new joysticks and joysticks with an inordinate number of buttons. Most likely, your joystick will work under Linux 2.2.
Mice in 2.2 aren't really different from mice in 2.0. (As in 2.0, there are some inconsistencies regarding mouse support that will be addressed in the future. For the most part, mouse control is provided through a daemon external to the kernel. Some mouse drivers, however, deliberately emulate a Microsoft standard mouse. The reasoning behind this is obvious, but it would be nice if it were decided one way or the other.) It should be noted that, while not solely a kernel issue, mice with Microsoft's spinning wheel extension are supported in recent versions of XFree86, Linux's most popular GUI. (However, many Linux applications have not been designed to take advantage of this feature.)
Additionally, several other input devices are now supported under Linux 2.2, including some digitizer pads. If your device emulates a mouse (as many do), then it is already supported by Linux 2.2 (and, in fact, Linux 2.0).
10) Video
Perhaps the most surprising and cutting-edge addition to the Linux kernel version 2.2 is what is called the 'frame-buffer console' driver (or 'fbcon', for short.)
Previously, the Linux kernel (for Intel-based machines) only understood and manipulated the video devices in text mode. Graphical support was to be provided by two other systems: 'svgalib' for console-based graphics, and a specialized X Server for window-based graphics. This kludgey system often required configuration information to be repeated and each system supported only a limited slice of the myriad of video devices in common use.
Since this addition is rather new, it remains to be seen whether it will truly replace the previous and long-standing duality. Unfortunately, it could be nearly a year after Linux 2.2 ships before this new system could be robust enough to support the cards and technologies that we already take for granted as working. My personal opinion is that this is the right idea, but I'm going to withhold judgment until we see exactly how far Linus and the developers decide to take this feature.
As an added side-effect of this new feature, primitive multi-heading has been added into the kernel for some devices. Currently, this is limited to some text-mode output but it is reasonable to assume that this very new addition to the Linux kernel will mature somewhat during the 2.2.x and 2.3.x cycles.
It should also be mentioned that it is now possible to remove support for 'virtual' terminals as provided by the kernel. This allows very memory-conscious people to save just a tad more.
Although unimaginable to the desktop user, Linux can now work even better on systems that do not actually include any sort of video device. In addition to being able to log in over serial or networked lines, as Linux 2.0 and previous Linuxes allowed, it is now possible to redirect all the kernel messages (usually sent to the console directly before any hardware was initialized) to a serial device.
11) Networking: Ethernet, ISDN, and the lowly modem.
I don't have a huge amount of experience here; I've been using the same network cards in all my machines for several years. But it doesn't take an Alan Cox to see that the number of Ethernet and ISDN devices supported in Linux 2.2 has risen sharply. I have been told that newer solutions such as cable modems are supported, also.
My only gripe in this regard is the continued non-support of so-called 'Winmodems.' Not that I blame Linux for their absence; making modems that are 80% software is just a dumb idea anyway, but the idealist in me hopes that some day these pesky devils will be supported like their less stripped-down cousins.
12) Amateur Radio people are Linux people, too.
Since before Linux 2.0, Linux has been one of the few desktop OSes to include native support for computer-based amateur radio. (Not that I actually know what that entails, but it seems to be a more popular option outside the US.) Linux 2.2 adds support for the NetROM and ROSE amateur radio protocols. The basic AX.25 layer has also been materially enhanced.
13) Filesystems for the World
Linux 2.2 has a wide array of new filesystems and partition types for interconnectivity. In addition, many of Linux's supported filesystems (including those I haven't listed here) have been updated with a new caching system to markedly improve performance. (In fact, not updating the drivers wasn't even an option if one wanted them included in Linux 2.2.)
For the Microsoft nut, Linux will now read NTFS (Windows NT) drives and Windows 98's FAT32 drives (also used by some later versions of Windows 95). Linux 2.2 also understands Microsoft's Joliet system for long filenames on CD-ROMs. And finally, Linux also understands a new type of extended partition that Microsoft invented. Drivers to read and write Microsoft and Stacker compressed drives are being developed but not yet included in the kernel. There is continuing work with NTFS to allow for both reading and writing, but that support is still experimental.
For Mac connectivity, an HFS driver for reading and writing Mac disks has been included. HFS+ and MFS (the ancient floppy format) are not yet supported. Macintosh partition tables can now also be read by the kernel; this allows Mac SCSI disks to be mounted natively.
Sadly, OS/2 users will still not be able to write to their HPFS drives. Some updates have been made to the HPFS driver to support the new 'dcache' system but not the complete overhaul that some were hoping for. There is ongoing work outside the kernel to include read/write support in this driver but those changes did not make it into the initial release of 2.2.0.
If there are any Amiga users left (and there are), they will be pleased to know that the FFS driver has undergone some minor updates since 2.0. This is especially useful as the new generation of PPC Amigas will continue to support this format.
For connectivity to other UNIXes, Linux 2.2 has come forward in leaps and bounds. Linux 2.2 still includes the UFS filesystem which is used on BSD-derived systems, including Solaris and the free versions of BSD. Linux 2.2 can now also read the partition table formats used by FreeBSD, SunOS, and Solaris. For SysV-style UNIXes, Linux 2.2 features a somewhat updated version of SysVFS. Linux 2.2 can also read Acorn's RiscOS disks. And finally, Linux 2.2 features a somewhat updated version of the ever-popular Minix filesystem, which can be used for small drives and floppies on most UNIXes. With so many incompatible formats (and Linux 2.2 reading so many of them), it's amazing anyone ever got any work done.
In other news, support for 'extended' drives (the format used by much older versions of Linux) has been removed in favor of the 'second extended' filesystem. (This shouldn't matter to many people, 'ext2' is far superior to its predecessor.) With the increased support of initial ramdisks, a 'romfs' has been created which has very minimal overhead.
While not quite a filesystem, Linux 2.2 includes enhanced support for stretching a filesystem across several disks transparently. At present, this support can be used in RAID 0, 1, 4, and 5 modes as well as a simple linear mode.

14) Networking II: Under the Hood
On the protocol front, a lot has happened that I simply don't understand completely. The next-generation Internet protocol, IPv6, has made an appearance. SPX, a complement to IPX, is new as well. DDP, the protocol of choice for older AppleTalk networks, has also been improved. And, just as you would come to expect by now, the existing protocols have been improved as well. I only wish I had the need to use some of this stuff...
On the low-end front, not much has changed. PPP, SLIP, CSLIP, and PLIP are all still available for use. I guess some things don't need much improvement. (Although each of those drivers have been updated in one way or another.)
The list keeps going, however. Linux 2.2 will have an excellent new networking core, new tunneling code, a completely new firewalling and routing system called 'ipchains', support for limiting bandwidth consumption, and a ton more. It's just amazing. I wish I could keep track of it all. (But, who am I kidding?)
It should be noted that file and printer sharing protocols have also been improved and markedly enhanced. SMB, the protocol for accessing Windows-based shared filesystems has been somewhat improved with bugfixes and the like. If you are a fan of NetWare (some people are...), you'll be happy to know that Linux 2.2 supports a large number of improvements in this area, including access to two different kinds of NCP long file names. Trusty NFS has also been improved, both at the server level and the client level. And finally, those eggheads over at CMU have been hard at work developing the new distributed network filesystem, Coda. This filesystem supports a large number of highly-requested features including disconnected operations for laptops, an advanced cache system, and security improvements.
On somewhat of a tangent, Linux 2.2 also includes a driver which will allow one to share (and remotely mount) whole disk images over a network.
15) Not Everyone Speaks English.
Linux 2.0 is a very international OS with support for international keyboards and the like. Linux 2.2 adds to this and other internationalization features the ability to load some Microsoft/UNICODE codepages for translating filenames into Linux's native system (which is UTF-8, another encoding of UNICODE). Currently, the only filesystems that use these translations are Microsoft's VFAT and Microsoft's Joliet ISO 9660 (CD-ROM filesystem) extension.
16) Unix98: The Next Generation
Linux 2.2 will be a more 'standard' UNIX in a number of ways. The most pronounced of these ways to the end user will be the addition of UNIX98-style Pty devices using a new filesystem (devpts) and a cloning device to provide the functionality.
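As a small illustration of my own (not taken from the kernel documentation), here is roughly how a program obtains a UNIX98-style pty through the new cloning device; it assumes glibc's grantpt(), unlockpt() and ptsname() and a devpts filesystem mounted on /dev/pts:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Each open of the cloning device hands back a brand-new master pty. */
    int master = open("/dev/ptmx", O_RDWR | O_NOCTTY);
    if (master < 0) {
        perror("open /dev/ptmx");
        return 1;
    }
    /* Set ownership/permissions on the slave side and unlock it for use. */
    if (grantpt(master) < 0 || unlockpt(master) < 0) {
        perror("grantpt/unlockpt");
        return 1;
    }
    /* The matching slave shows up as an entry on the devpts filesystem. */
    printf("slave pty is %s\n", ptsname(master));
    close(master);
    return 0;
}

The old BSD-style /dev/ptyXX pairs still work, but the cloning approach means programs no longer have to hunt through the pty namespace looking for a free pair.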
17) And, finally...
In addition to those noted above, there are a large number of other drivers and things that just don't fit in anywhere but should still be noted. So, in no given order, the oddball updates of Linux 2.2:
The loopback driver, which allows disk images to be mounted and manipulated just like any regular drive, has been improved in a number of ways. Of these improvements, the most notable difference to users will be its increased support for encryption and the mounting of encrypted hard disks and disk images.
A driver for accessing your computer's CMOS memory has also been provided in Linux 2.2, which may be useful in some applications. (Sadly, a similar driver to access your BIOS's flash memory did not make it; it will still be necessary to boot from a DOS floppy to flash your computer's BIOS to a new version.)
And finally, in the past, Linux used a half-user/half-kernel method of loading and unloading drivers (called 'modules') known as 'kerneld'. This method was good but inefficient. Linux 2.2 has removed kerneld and replaced it with a smaller, all-kernel solution called 'kmod'.
This is the 'revised millennium penguin' version of this document (1/26/99) and is really just a minor update over the last three final versions. Linux 2.2 is out now, so obviously no new features will be added and I should be safe.
As always, I can be reached at jpranevich@lycos.com.
Thank you, and Good Night.
Joseph Pranevich
Thanks to all our authors, not just the ones above, but also those who wrote giving us their tips and tricks and making suggestions. Thanks also to our new mirror sites.
I'm sick of the impeachment trial and wish a way could be found for the Senate to get on with "business as usual" without the Republicans feeling like they have lost face. Of course, in my opinion they already have. All the polls show the American people don't want Clinton impeached and want the Congress to get on with their "real work". For the Republicans to completely ignore this and go forward with the trial, all the while saying the American people are apathetic, shows an incredible arrogance on their part. They obviously think their constituents have no intelligence and no opinions worth listening to.
I voted for Clinton but I wouldn't vote for him again. His complete lack of judgement in his personal life is appalling. However, I don't think he has done anything that can be called a "high crime".
Enough of the soap box.
Have fun!
Marjorie L. Richardson
Editor, Linux Gazette, gazette@ssc.com
Linux Gazette Issue 37, February 1999,
http://www.linuxgazette.com
This page written and maintained by the Editor of Linux Gazette,
gazette@ssc.com