Linux Gazette... making Linux just a little more fun!
Copyright © 1996-98 Specialized Systems Consultants, Inc.
_________________________________________________________________
Welcome to Linux Gazette! (tm)
_________________________________________________________________
Published by:
Linux Journal
_________________________________________________________________
Sponsored by:
InfoMagic
S.u.S.E.
Red Hat
LinuxMall
Linux Resources
Mozilla
Our sponsors make financial contributions toward the costs of
publishing Linux Gazette. If you would like to become a sponsor of LG,
e-mail us at sponsor@ssc.com.
Linux Gazette is a non-commercial, freely available publication and
will remain that way. Show your support by using the products of our
sponsors and publisher.
_________________________________________________________________
Table of Contents
July 1998 Issue #30
_________________________________________________________________
* The Front Page
* The MailBag
+ Help Wanted
+ General Mail
* More 2 Cent Tips
+ Producing a Resume in PDF with LaTeX
+ UNIX System man Pages
+ ext2 Partitions
+ Re: bpp 16 Question
+ Network Cards
+ Tip for using Windows 95 buttons in KDE
+ PPP, SLIP and Other Remote Service
+ News Bytes
o News in General
o Software Announcements
+ The Answer Guy, by James T. Dennis
+ CHAOS: CHeap Array of Obsolete Systems, by Alex Vrenios
+ Clueless at the Prompt, by Mike List
+ 8 Reasons to Make the Switch, by Bill Bennet
+ Integrated Software Development with WipeOut, by Gerd Mueller
+ Install New Icons in Caldera's Looking Glass Desktop, by
David Nelson
+ Installing Microsoft & Linux, by Manish P. Pagey
+ Linux Expo
o Linux Expo a Smashing Success!, by Norman M. Jacobowitz
o Linux Expo Editor Wars!, by Eric S. Raymond
o The Fourth Annual Linux Expo, by David Penland
+ LinuxCAD Impressions, by Robert Wuest
+ Book Review: A Methodology for Developing and Deploying
Internet & Intranet Solutions, by Jan Rooijackers
+ New Release Reviews, by Larry Ayers
o The Blackbox Window-Manager
o Lesstif: One User's Impressions
o Sabre: An Svgalib Flight Sim
o SFM: A New GTK-Based Application
+ Portable GUI C++ Libraries, by Sean C. Starkey
+ Using Linux Instead of an X Emulator, by Al Koscielny
+ USENIX 1998, by Aaron Mauck
+ The Back Page
o About This Month's Authors
o Not Linux
The Answer Guy
The Graphics Muse Will Return
_________________________________________________________________
TWDT 1 (text)
TWDT 2 (HTML)
are files containing the entire issue: one in text format, one in
HTML. They are provided strictly as a way to save the contents as one
file for later printing in the format of your choice; there is no
guarantee of working links in the HTML version.
_________________________________________________________________
Got any great ideas for improvements? Send your comments, criticisms,
suggestions and ideas.
_________________________________________________________________
This page written and maintained by the Editor of Linux Gazette,
gazette@ssc.com
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
The Mailbag!
Write the Gazette at gazette@ssc.com
Contents:
* Help Wanted -- Article Ideas
* General Mail
_________________________________________________________________
Help Wanted -- Article Ideas
_________________________________________________________________
Date: Wed, 03 Jun 1998 11:05:23 +0100
From: Maurizio Ferrari, Maurizio.Ferrari@tin.it
Subject: Photogrammetry tools for Linux?
I am looking for a Linux program to do some close-range
photogrammetry. Close-range photogrammetry is a technique that makes
it possible to reconstruct 3D images from a series of 2D pictures.
There are a few powerful (and relatively inexpensive) tools for
Windows, but none so far for Linux that I know of. There was once
something called Photo4D, but despite a massive Internet search, every
occurrence of Photo4D seems to have been wiped from the face of the
earth. It is listed in SAL, but all the links fail.
I don't want to resort to buying and using Windows software for this.
Help, anyone?
Maurizio
_________________________________________________________________
Date: Sun, 07 Jun 1998 11:36:33 -0500
From: Mike Godwin, mgodwin@socket.net
Subject: Searching (somewhat in vain) for sources on shell scripting
I recently came across an excellent mini-howto on overcoming some of
the pitfalls of having a dynamic IP address
(ftp://sunsite.unc.edu/pub/Linux/docs/HOWTO/unmaintained/mini/Dynamic-
IP-Hacks).
Reading this document has refueled my desire to learn shell scripting,
sed rules and the like. My search of the Internet for information on
these topics has, however, been fruitless.
I would be most grateful if someone could point me to a good shell
scripting tutorial or book.
Thanks in advance.
Mike
_________________________________________________________________
Date: Fri, 5 Jun 1998 22:58:11 +0200
From: "Himbeergarten Hummel", himbeergarten.hummel@nanet.at
Subject: X Window System on a monochrome notebook
I have a 486DX notebook with a monochrome display. What should I do to
make the X Window System run on it?
Himbeergarten Hummel
_________________________________________________________________
Date: Tue, 09 Jun 1998 13:06:28 PDT
From: "Dave Stevens", davestevens@hotmail.com
Subject: kudos
I think the Coldiron article on replacing NT with Linux is the best
thing I've seen in the gazette. Congratulations. More such articles
are needed. I am especially interested in an article explaining why
Linux doesn't come with a "system requirements" box on the package (no
package??). Seriously, though, I am a computer dealer and have many
times advised people to buy their application software first then buy
a computer that will run that package. If I tell my customers to go
out and buy a 386 with 16 MB of ram and a half MB video card and a 200
MB hard drive, they will think I am [characterization deleted!] in the
head. And maybe they'll be right. How much difference does the
underlying hardware make to the user of an X application, and how can
I assess (for them) the varying cost effectiveness of a faster
processor versus more RAM versus a SCSI disk versus just a bigger IDE
disk. Maybe you can commission an article like this. (Don't even THINK
of asking me.) Surely one of your loyal readers has relevant
experience to write it up.
Great magazine, keep up the good work. If ever you find yourself in
northern BC I will happily buy you a beer.
Dave Stevens
_________________________________________________________________
Date: Fri, 12 Jun 1998 08:49:05 -0700 (PDT)
From: Renato Weiner, reweiner@yahoo.com
Subject: Suggestion for Article
Recently I was looking at the Gazette, and I think I have a good
suggestion for an article that would be very useful for the Linux
community.
I have had some technical difficulty keeping two kernel versions on my
system at the same time, I mean a stable one and a development one. I
searched the net for information on how to make the two coexist, but
what is out there is completely fragmented.
If somebody more experienced could put all this information together,
it would certainly help a lot of people, from kernel developers to
end users.
Thanks a lot for your patience.
Renato.
_________________________________________________________________
Date: Tue, 16 Jun 1998 10:42:06 +0200
From: Carlo Vinante, vinante@igi.pd.cnr.it
Subject: Printing Problems
I've just updated to Red Hat 5.0, and I can no longer print documents
using Ghostview, LyX or anything else. Tests are OK. Does anybody have
a suggestion?
Carlo Vinante
_________________________________________________________________
Date: Mon, 15 Jun 1998 15:46:35 +0200 (MET DST)
From: Sara Briganti mat.1510, briganti@CsR.UniBo.IT
Subject: Information
We are four Italian students, and we are having a look at ELM's
sources. We are having a lot of problems with them...
Could you help us? Do you know any interesting sites about how ELM
works? And about sendmail?
Thank you a lot. Bye.
Sara, Elsa, Michele, Livio
_________________________________________________________________
Date: Sat, 13 Jun 1998 22:24:47 +0200
From: Daniele Verzelloni, dverzel@tin.it
Subject: Network configuring
Please help me configure networking under Red Hat Linux. I have an
ISDN adapter by Asuscom that I use for the Internet under Windows 95,
and I can't configure it! I've also got an Ethernet adapter to connect
to another computer, and I can't configure that either! Thank you, and
sorry for my bad English; I'm Italian.
Daniele
_________________________________________________________________
Date: Thu, 18 Jun 1998 23:12:30 +0200
From: Eric CANAL, Eric.Canal@supelec.fr
Subject: a question
I've recently bought a CD-ROM recorder, and I would like to know
whether it is legal to make a Red Hat CD distribution for my own use.
My idea is to copy the FTP distribution onto a CD and install from it.
I've tried, but the installer tells me that I don't have a Red Hat
CD-ROM. Am I missing a particular file?
thanks for your answer and BRAVO for your Gazette :)
a French reader, Eric Canal
(Better check with Red Hat about legalities. --Editor)
_________________________________________________________________
Date: Tue, 23 Jun 1998 23:54:20 -0700
From: Ruth Milne, rmilne@mail.bulkley.net
Subject: article idea
I have been reading a lot of speculation about whether Linux can ever
displace Microsoft on the desktop. In the course of wading through a
lot of hype I haven't seen much actual experience reported about an
ordinary computer user installing Linux on their PC. I don't mean
someone who is already a Linux enthusiast and I don't mean someone
with a computer science degree either. Just an ordinary computer user
with an IQ bigger than a shoe size, sitting down with a brand new
Intel box and a Red Hat 5.1 package, say, and going through the hoops
up to the point where X starts up okay and the modem is a working
Internet device. This ought to be compared to such a person doing the
same operation with a new box and a copy of W98. I think that would
make a useful comparison.
Dave Stevens
_________________________________________________________________
Date: Thu, 25 Jun 1998 03:32:11 EDT
From: RangeScale@aol.com
Subject: Need older Linux
Okay, I am pretty new to Linux and am trying to learn it. The main
problem is that I always have my desktop tied up doing more important
things, and I don't have the room on it to hold Linux either. My
solution is to pull out my old 286 laptop (old but very good) and use
that to start learning Linux. My big problem, though, is finding a
version that will run on it. I have Debian 1.3, but its minimum
requirement is a 386 or better. Is there a version that will run on a
286, and where can I get it?
_________________________________________________________________
Date: Sun, 28 Jun 1998 00:47:14 +0200
From: B.L.Michielsen, BMichielsen@csi.com
Subject: Communication Problem
I have a problem communicating with CompuServe through Seyon since I
installed a 16650A serial card and a USRobotics Sportster MessagePlus
modem on my Dell 486DX2 66MHz running Red Hat 4.1, kernel 2.0.17.
Before, I used a 14.4 Hayes-compatible modem connected to a serial
port with a 16450 IC; in that configuration everything was slow but
OK. I am connecting to a CompuServe server at speeds up to 28,800 bps.
The characters in the Seyon terminal form unreadable garbage, and I
cannot find out how to set the connection parameters to get it right.
To complete the picture: when I make a PPP connection to a 56kbps
CompuServe server and use Netscape Communicator, everything runs
perfectly well, so I guess the Seyon problem is not related to kernel
parameters but rather to xterm?
Any help would be greatly appreciated.
Bas L. Michielsen
_________________________________________________________________
General Mail
_________________________________________________________________
Date: Tue, 02 Jun 98 12:19:28 -0500
From: cokeydepercin@pmsc.com
Subject: Article on home networking.
I just read a reply to the home networking article by Mr. Gray and I
agree that home networking is cheap and easy. I disagree somewhat
about the 100baseT. I've just upgraded from 10baseT to 100baseT. The
hub was $100USD for an eight port hub with uplink and the cards were
$30USD (Dec Tulip chip set). I've heard there may be some cheaper NICs
now $20~25USD. My upgrade cost was $250 for 5 machines - 3 Win95,
Linux server, multi-boot Linux/win95/NT - the cable was CAT5 to begin
with. The additional cost of putting in 100 vs 10 is so slight, about
$115 in this case as the cable is the same, that it isn't worth
installing 10baseT. The advantage is that 100baseT and a reasonably
fast Linux machine allows a Win95 machine to access apps almost as
fast (in some cases faster) from the network than from its own drive.
Note that I, too, build from junk as much as possible, and the
children's machines (the Win95 ones) are very low-end Pentiums with
old, slow, small drives that contain only the OS and swap. Everything
else is on the server (install once, use many!).
There is a caveat to this, of course: 100baseT NICs for ISA machines
are VERY expensive, so if you have ISA machines, your only realistic
choice is 10baseT. The one 100baseT ISA NIC I priced (3Com) cost more
than all the PCI NICs for my upgrade.
Just my $0.02 or so. Keep up the good work, I really enjoy the
magazine.
Cokey
_________________________________________________________________
Date: Tue, 02 Jun 1998 15:48:27 +0100
From: Raphael Marvie, raphael.marvie@cs.man.ac.uk
Subject: Comment about LG last review
It took me three tries to get the full article about "Replacing NT
with Linux", but I finally did it. I am very pleased to see people
from the "real world", as they call themselves, admit that Linux can
keep a lot of people from using bad software. There is only one thing
that makes me sad: the only people who are going to read this article
are Linux users.
Is there any way to get "real-world" people to read such articles? I
am not talking about a holy war against M$, but I think the worst
thing for Linux and other brilliant systems or software is that the
end user never hears about them.
The fact that Netscape has moved to Open Source software was a big
advertisement for GNU/Linux solutions. I hope we will be able to take
advantage of it to say to managers: "Hey, we can do everything you
want, and in a better way than Micro$oft and Co. do it today. Instead
of buying a $60,000 solution each year for updates, pay someone
$60,000 a year to build the exact solution you need using Open Source
software. That means you get a *personal*, *reliable* *IT* solution."
That is the challenge: teach them that a man or a woman is more
important than a piece of software, because a man or a woman can adapt
to the needs of a firm and is more valuable to the end user as a
source of information than a badly written manual.
Keep it up, LG; the job you are doing is brilliant.
Linuxly yours, Raphael
_________________________________________________________________
Date: Tue, 02 Jun 1998 13:36:06 +0000
From: Andrew Josey, a.josey@opengroup.org
Subject: Web resource - UNIX 98 Spec online
With the recent announcements concerning Linux and conformance to the
UNIX 98 specification, I thought it would be useful to send you the
URL where the online specification can be browsed, searched and
downloaded.
It's at http://www.UNIX-systems.org/go/unix/
Perhaps you could include this as a tip in the next Linux Gazette.
best regards, Andrew
_________________________________________________________________
Date: Tue, 2 Jun 1998 12:19:44 +1000 (EST)
From: Con Zymaris, conz@cyber.com.au
Subject: Article ideas...
It would be of general interest, and would help the Linux/Open Source
community, if people out there were encouraged to advocate that their
local university have its Computer Science students' major final-year
projects written as open source. For reasons why the students would
want to do this, check out:
http://www.cyber.com.au/misc/frsbiz/students.htm
Cheers, Con
_________________________________________________________________
Date: Mon, 1 Jun 1998 16:04:12 -0700
From: "Travis Clark", hilt@telepath.com
Subject: Simple Suggestion
To further Linux in this world of ours, I think it fitting that Linux
Programmers look at two different ways this can be accomplished:
1. Applications - This does not end with word processors... desktop
publishing systems, a simple database system, accounting software,
the whole nine yards. If we focused on the software that companies
use, at a lower price (or free) compared to Windoze and with
comparable or better performance, then Linux would be more
acceptable worldwide.
2. Games - As much as I hate to admit it, games are a must in this PC
world. There are versions of popular games for Linux, but there
are no MAJOR companies designing games for Linux. If we can get a
Doom/Myst/DeerHunter-type game designed specifically for Linux,
there will definitely be more interest in Linux in the market.
That's my two cents...
Travis Clark
_________________________________________________________________
Date: Mon, 22 Jun 1998 14:50:45 -0400
From: Brian Catlin, Brian_Catlin@BayNetworks.COM
Subject: Suggestions to improve readability
First, I would like to express my appreciation to all the authors for
taking time to write excellent articles.
I do, however, have a suggestion or two that would make the zine that
much more accessible.
As background, I am one of your readers who prints out the zine and
then reads it. It is much easier on my tired old eyes that way, and I
also get a nice resource to use when the screen is cluttered with
windows from whatever project I am working on.
With that said, I have a couple problems that can be easily solved.
* The first thing is links in the articles. The usual standard one
sees on the net is to put the URL in the body of the article and
then link it. That way, we off-line readers can fire up a browser
later and go directly to the site mentioned, without having to find
the link in the online version of the article.
* Secondly, and this came up in the latest issue BTW: when giving
source code, configuration files or other text-based examples,
please keep them as text. Putting backgrounds behind the code makes
it hard to read, and if the listings are in fact graphics, one has
to type the code in by hand. A better way is to delineate them with
some sort of blocking character string and use the appropriate HTML
tag to show that it is an example. I tend to use the following to
start and stop sections of code:
#-----------------------------
(Note: it is a pound sign with a bunch of dashes).
This will speed loading in online browsers, allow cut-and-paste
operations, and ensure readability for those of us who read from
printouts. (I know that more people than just I do this!)
Thanks again for a great zine!
Brian
(Okay. First, I'm guessing you are objecting to the practice of using
a word instead of the address as the link text, so that the text
version shows only the word and drops the address. I can make sure
the address appears in the sections I do myself, but I really don't
have time to do it for every article. I will print your letter, and
maybe that will give authors a push in the right direction. Second, I
use whatever the authors send as listings, and most do keep them
between preformatted-text tags without backgrounds. Mr. Coldiron's
article last month did use backgrounds; his article has been quite
popular. Thanks for writing, --Editor)
_________________________________________________________________
Published in Linux Gazette Issue 30, July 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Next
This page written and maintained by the Editor of Linux Gazette,
gazette@ssc.com
Copyright © 1998 Specialized Systems Consultants, Inc.
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
More 2¢ Tips!
Send Linux Tips and Tricks to gazette@ssc.com
_________________________________________________________________
Contents:
* Producing a Resume in PDF with LaTeX
* UNIX System man Pages
* ext2 Partitions
* Re: bpp 16 Question
* Network Cards
* Tip for using Windows 95 buttons in KDE
* PPP, SLIP and Other Remote Service Support
_________________________________________________________________
Producing a resume in PDF with LaTeX
From: David M. Cook davecook@hotmail.com
Date: Mon, 01 Jun 1998 23:05:24 +0000
LaTeX and the resume.sty package are an easy way to produce a very
attractive resume under Linux. One just needs to fill in the
boilerplate provided. resume.sty is available from any CTAN archive,
such as cdrom.com:
ftp://ftp.cdrom.com/.1/tex/ctan/macros/latex209/contrib/resume
However, I've found that Windows users are often not familiar with the
usual PostScript output of the dvips program, or with how to view it.
Luckily, Ghostscript provides the ps2pdf program for converting
PostScript to Adobe's Portable Document Format (PDF), which is fairly
familiar to Windows users.
However, converted PostScript documents that were produced from LaTeX
source using the default Computer Modern fonts look very poor when
read with the Adobe PDF reader. The trick is to use the times package,
which changes all the fonts produced by your LaTeX source to ones the
Adobe reader can handle. Just include the package like this in your
document:
\documentclass[12pt]{article}
\usepackage{resume,times}
%other preamble commands
\begin{document}
%document body
\end{document}
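For readers new to the TeX tool chain, the whole build boils down to
something like the following sketch (the file name resume.tex is only
an example; it assumes the latex and dvips programs plus Ghostscript's
ps2pdf are installed):
latex resume.tex                # produces resume.dvi
dvips resume.dvi -o resume.ps   # DVI to PostScript
ps2pdf resume.ps resume.pdf     # PostScript to PDF via Ghostscript
The resulting resume.pdf can then be sent to Windows users, who can
read it with the freely available Adobe reader.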
Some other things are worth mentioning here: PStill, another
PostScript-to-PDF converter; pdfTeX, which produces PDF instead of DVI
files from TeX input; and finally the TeX Users Group page, which has
tons of great links:
ftp://ftp.cstug.cz/pub/tex/local/cstug/thanh/pdftex/
http://www.this.net/~frank/pstill.html
http://www.tug.org/interest.html
--
Dave Cook
_________________________________________________________________
UNIX system man pages
From: Andrew Josey a.josey@opengroup.org
Date: Wed, 03 Jun 1998 10:10:41 +0000
Hello, included is a possible tip for the Linux Gazette.
Ever needed to know what the official UNIX man page for a particular
command or function says? A new web resource from The Open Group is
the Common Access to the UNIX Man Pages, a hypertext html set of
browsable pages common to all UNIX 95 and UNIX 98 branded systems.
To try it out see http://www.opengroup.org/common_access/
--
Andrew Josey
_________________________________________________________________
ext2 Partitions
From: Albert T. Croft acroft@cyber-wizard
Date: Mon, 08 Jun 1998 14:57:03 -0500
I recently ran into a small problem, and I think the results might be
helpful to others. I was helping a friend with a problem on his Linux
machine, and we needed to find a file; unfortunately, neither of us
knew where it might have been installed.
Since he has both ext2 and vfat partitions, we realized that a find
command over everything might take a while and would probably give
some false results. We knew there might be files with similar names on
his vfat partition, files we were sure were not the ones we were
looking for. The files we wanted would only be on the ext2 partitions.
We started looking for an answer with the -mount option of the find
command; unfortunately for us, that only looks at files on the same
device as the path given to find. (A look at the output of the mount
command shows why that would be a problem for us.)
/dev/hda2 on / type ext2 (rw)
none on /proc type proc (rw)
/dev/hda6 on /home type ext2 (rw)
/dev/hda8 on /tmp type ext2 (rw)
/dev/hda7 on /usr type ext2 (rw)
/dev/hda1 on /win95 type vfat (rw,umask=0111)
We tried writing a shell script, using grep and gawk to get the mount
points for the ext2 partitions and hand them to find. This proved
unworkable when we were looking for patterns such as h2*. We then
tried to write just a find command, using gawk and grep to get the
mount points. This was somewhat better, but using a print statement in
gawk to produce the names of the mount points wouldn't work. Some help
came from remembering that gawk has a printf statement, which gave us
control over the output format.
Our final product, which we found quite useful and now have in our
.bashrc files as linuxfind, is the following:
find `mount|grep ext2|gawk '{printf "%s ", $3}'` -name
To use as an alias:
alias linuxfind="find `mount|grep ext2|gawk '{printf "%s ", $3}'` -name "
Written this way, other options to the find command can be specified,
such as -perm, -exec and -type. To use it, we simply type something
like:
linuxfind less
linuxfind h2*
linuxfind x* -perm -2000
The only problems we can see with this command so far are (1) if
drives that were mounted at login are unmounted during the session,
their mount points are still searched, and (2) if a drive is mounted
after login, it is not included unless the .bashrc file is sourced
again.
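If those limitations matter to you, a shell function is an alternative
to the alias: the mount table is then re-read every time the command
runs. A minimal sketch for bash (note that, unlike the alias above,
you pass the find options yourself):
linuxfind () {
    # collect the mount points of the currently mounted ext2 partitions
    find $(mount | grep ext2 | gawk '{printf "%s ", $3}') "$@"
}
For example: linuxfind -name 'h2*' or linuxfind -name 'x*' -perm -2000.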
--
Albert Croft
_________________________________________________________________
Re: bpp 16 Question
From: Michael Huttinger mhutt.removespam@netnitco.net
Date: Sun, 14 Jun 1998 19:56:41 +0000
In regard to the question about starting X with 16 bitplanes instead
of 8 (LG #28)...
I have done the following (assuming you are using XFree86):
Open your XF86Config file in an editor and look for the "Screen"
section you are using. Inside that section, before the Display
subsections, add an entry specifying the default color depth:
DefaultColorDepth 16
This will default your screen to 16 bitplanes.
My example screen section follows:
Section "Screen"
Driver "accel"
Device "STB Velocity 128"
Monitor "My Monitor"
DefaultColorDepth 16
Subsection "Display"
Depth 8
Modes "1024x768" "800x600" "640x480"
ViewPort 0 0
EndSubsection
Subsection "Display"
Depth 16
Modes "1024x768" "800x600" "640x480"
ViewPort 0 0
EndSubsection
Subsection "Display"
Depth 24
Modes "1024x768" "800x600" "640x480"
ViewPort 0 0
EndSubsection
Subsection "Display"
Depth 32
Modes "1024x768" "800x600" "640x480"
ViewPort 0 0
EndSubsection
EndSection
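If you would rather not change the default in XF86Config, XFree86 also
lets you pick the depth for a single session on the command line:
startx -- -bpp 16
Everything after the "--" is passed to the X server, so only that
session starts with 16 bitplanes.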
--
Mike Huttinger
_________________________________________________________________
Network Cards
From: Wari Wahab wari@tecnologist.com
Date: Sat, 13 Jun 1998 21:36:27 +0800
Hi there, I'd just like to pass along a tip or two regarding the
network card in your Linux box.
I had a 3Com 3c90x in my computer, and it was not working up to speed.
I replaced it with another one of the same kind, and the most I could
get out of FTP transfers from my machine was a measly 220 KB/s, and
Samba acted weird. I thought my network was causing the problem, and
indeed it was.
Our network is all Cisco, and there seems to be some disagreement
between the two brands. I changed my card to an Intel 'eepro100', and
now I can max out at 800 KB/s on a 10 Mb/s network. Cool.
So, if you find that performance is not as good as it should be (those
Windows NT guys may be laughing at you, as they did at me, wondering
why Linux is super slow), it could be the network card itself.
Regards,
Wari Wahab
_________________________________________________________________
Tip for using Windows 95 buttons in KDE
From: Jochen A. Stein jst@writeme.com
Date: Fri, 19 Jun 1998 21:05:21 +0200
Following up on Andreas Ehliar's 2-cent article in the June Linux
Gazette, I took the same approach and made a patch for KDE to shift
some functionality from ALT to the W95 key. Full instructions and a
patch against Beta 4 can be found at
http://home.pages.de/~jst/kde-w95.html.
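Independently of the KDE patch, the W95 key can also be turned into an
ordinary X modifier with xmodmap. A small sketch; keycode 115 is
typical for the left W95 key under XFree86, but verify yours with xev
first:
xmodmap -e "keycode 115 = Super_L"
xmodmap -e "add mod4 = Super_L"
Window managers that can bind mod4 will then treat the W95 key as a
normal modifier.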
--
Jochen Stein
_________________________________________________________________
PPP, SLIP and Other Remote Service Support
From: Daniel Blezek blezek@worldnet.att.net
Date: Wed, 17 Jun 1998 22:40:48 -0500
Hi, here's a short tip:
Recently, I started working from home on a UNIX system. The system I
was working on did not support PPP, SLIP, or any other remote service
except shell sessions over a 9600 baud modem. So I decided to download
SLIrP (a program that emulates PPP/SLIP over an ordinary shell
session) to the remote system. Here is the snag: the remote system did
not support zmodem, ymodem, kermit or any of the other file transfer
protocols, and since I had no TCP/IP connection, I could not use rsh
or ftp. The solution? I used uuencode to convert the SLIrP binary to
text, started vi on the remote system, and copied and pasted the
entire text (all 360K) into the remote shell session. After eating
dinner, I returned to write the uuencoded binary to the remote hard
disk, uudecoded it, uncompressed it, and started up SLIrP on the
remote system. After pppd came up on my Linux system, I was fully
connected.
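For anyone who wants to repeat the trick, the transfer amounts to
something like this sketch (file names are only illustrative, and it
assumes gzip and uudecode exist on the remote system):
# on the local machine:
gzip -9 slirp                        # shrink the binary first
uuencode slirp.gz slirp.gz > slirp.uu
# paste the contents of slirp.uu into vi on the remote system and
# save it as slirp.uu, then on the remote machine:
uudecode slirp.uu                    # recreates slirp.gz
gunzip slirp.gz
chmod +x slirp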
Ain't LINUX fun?
--
Dan
_________________________________________________________________
Published in Linux Gazette Issue 30, July 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
This page maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright © 1998 Specialized Systems Consultants, Inc.
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
News Bytes
Contents:
* News in General
* Software Announcements
_________________________________________________________________
News in General
_________________________________________________________________
August Linux Journal
The August issue of Linux Journal will hit the newsstands July 10. The
focus of this issue is Navigating Linux, and our feature article is an
interview with Marc Andreessen and Tom Paquin of Netscape, conducted
by Doc Searls. interview.html is the introduction to this interview.
Check out the Table of Contents at
http://www.linuxjournal.com/issue52/index.html. To subscribe to Linux
Journal, go to http://www.linuxjournal.com/ljsubsorder.html.
_________________________________________________________________
An Invitation: The Future of Linux with Linus Torvalds
On July 14, 1998, at 6:00PM, Taos Mountain ( http://www.taos.com/ ) in
association with the Silicon Valley Linux User Group
(http://www.svlug.org/) will present a panel discussion on THE FUTURE
OF LINUX. Linux is a freely available version of the UNIX operating
system.
Panelists will include Linus Torvalds, the creator of Linux; Robert
Hart from retail Linux distributor Red Hat Software; Larry Augustin of
the Silicon Valley Linux User Group and director of Linux
International, a non-profit consortium of Linux users and vendors; and
Jeremy Allison, the developer of SAMBA. Phillip Hughes, publisher of
Linux Journal, will question the panelists.
Complete Press Release
For more information:
Michael Masterson, MMasterson@taos.com
_________________________________________________________________
LINC: Linux conference in Silicon Valley, California
Mon, 15 Jun 1998 23:56:32 +0000
LINC, the International Linux Conference and Exposition, will be held
in Silicon Valley, California next January.
We have just issued a Call for Papers, and we encourage Linux
developers to send abstracts for talks or tutorials.
More info at: http://lincexpo.org/
Complete Press Release If you have any questions, please mail me.
For more information:
Don Marti, dmarti@electriclichen.com
_________________________________________________________________
Position Available: network security - development/maintenance
Tue, 23 Jun 1998
SecurePipe Communications is currently accepting resumes for a network
security support and development position.
Responsibilities will include support of installed firewalls,
development and maintenance of open-source network security solutions,
and support of existing mail and web servers.
For more information:
http://www.securepipe.com/jobs.html
Joshua Heling, jrh@securepipe.com
SecurePipe Communications, Inc.
_________________________________________________________________
GNU Utilities Integrated Development Environment project
Mon, 15 Jun 1998 08:47:02 GMT
GUIDE: GNU Utilities Integrated Development Environment
The purpose of this project is to merge existing GNU and GPL utilities
into a graphical, GPL Integrated Development Environment containing an
editor, class browser, debugger, profiler, man-page generator, code
checking, testing, animation, and project management.
Go to http://sunsite.auc.dk/GUIDE/ and join the mailing list.
For more information:
Knud Haugaard Sørensen, khs@mi.aau.dk
_________________________________________________________________
WWW: Linux search engine in beta
Mon, 15 Jun 1998 08:50:18 GMT
Take a look at http://linux.ncg.net/search/
A search engine with a different twist.... We index only Linux related
web pages, and in addition to searching in the robot index, we'll look
up the keywords in our resource listings as well.
The engine uses heuristics to exclude most pages that aren't relevant
to Linux. Currently the engine is in early beta, with a small index of
about 75,000 documents as of June 11th, growing at a rate of a few
thousand documents per day. It might seem small, but the index already
contains most of the important Linux sites and is getting quite
useful.
Since we track which subjects are the most popular to search for, you
also help us improve the resource listings by testing the engine.
As soon as the indexer is well enough tested, we'll increase indexing
speed dramatically (from 10 documents at a time currently, to about
300).
For more information:
Vidar Hokstad, vidarh@ncg.net
_________________________________________________________________
The Freefire Project (IT security solutions)
Wed, 17 Jun 1998 13:26:28 GMT
After some time in the dark, I am happy to announce the Freefire
Project.
The Freefire Project tries to support developers and integrators in
building IT security solutions (especially firewalls) based on free
tools (Open Source). It is not operating-system dependent, but a lot
of the tools on the page can be used with Linux.
The project features a web site where you can find a lot of useful
links to free security tools and resources. There is a monthly
bulletin with articles about recently discovered tools.
There is a mailing list for developers. You do NOT need to subscribe
if you don't develop tools of your own; in that case it is enough to
enter your e-mail address in the announce form on the web pages, or
simply to monitor the web pages.
http://www.inka.de/sites/lina/freefire-l/index.en.html
The Start page is also available in German:
http://www.inka.de/sites/lina/freefire-l/index.de.html
We are also searching for contributors to the bulletin and for links
to tools which are not yet on the pages.
For more information:
Bernd Eckenfels. ecki@lina.inka.de
_________________________________________________________________
Linux Links
The Trove Project Press Release: trove.txt
Open Source Developer Day Press Release: opensource.pr
The Open Source Index: http://home.maine.rr.com/sickthing/osi
List of Linux Mailing Lists: www.linuxrx.com/Lists/Lists.perl
Linux Buyers Guide: http://www.linuxbuyersguide.com/
Linux Applications: http://www.cynetcity.com/cyberzone/497/
Linux Book Guide: http://members.bellatlantic.net/~ptgeiger/guidehome.htm
Article about Linux in Computer Currents Magazine:
http://www.currents.net/magazine/national/1612/inet1612.html
The Linux Console Tools: http://www.mygale.org/~ydirson/en/lct/
Article "How Linux Could Kill Windows NT":
http://www.zdnet.com/chkpt/adem2fpf/www.anchordesk.com/story/story_224
1.html
Linux Rally: http://www.penguincomputing.com/svlug-rally.html
Time Magazine Article:
http://cgi.pathfinder.com/netly/article/0,2334,13820,00.html
_________________________________________________________________
Software Announcements
_________________________________________________________________
PC-Internet
Check out the new PC-Internet at http://www.pc-internet.com/ (the site
is in Spanish only)
_________________________________________________________________
WrapBit 0.2.1 - virtual object storage and programming environment
Thu Jun 25 12:47:56 1998
WrapBit version 0.2.1 is now available. Read more about it from the
active server at http://public.comput.com/WrapBit/. WrapBit is a
virtual, persistent, write-once object storage and programming
environment. A small kernel serves forge-proofed data, metadata and
dynamic views (object invocation). XML is featured (but not imposed)
for object control messages.
_________________________________________________________________
w3mir 1.0.3 - HTTP copying and mirroring tool
Thu Jun 25 12:56:55 1998
w3mir 1.0.3 has been released and is available at
http://www.math.uio.no/ now.
Fixes include
* The -R/remove option to remove files is no longer more destructive
than intended.
* Files with 'unsafe' characters in their filenames are now saved as
"foo bar" instead of "foo%20bar".
* The -B switch works once again.
w3mir is an all-purpose HTTP copying and mirroring tool. The main
focus of w3mir is to create and maintain a browsable copy of one or
several remote WWW sites. Used to the max, w3mir can retrieve the
contents of several related sites and leave the mirror browsable via a
local web server, from a filesystem, or even directly from a CD-ROM.
w3mir supports HTML4, and has partial support for CSS, Java, ActiveX
and Adobe Acrobat (PDF) files.
_________________________________________________________________
Alphanumeric Paging Software beta test
Mon, 15 Jun 1998 09:02:46 GMT
EtherPage(TM) is now available on Linux
Calling beta testers for our EtherPage product running under Linux. If
interested, you can download software and request an evaluation
license code from http://www.ppt.com/eval/version30.html
EtherPage is a client/server based product for delivering messages
from computers to wireless messaging services such as alphanumeric and
numeric pagers. The product includes a web interface for interactive
use and administration, a command line interface and a C API.
_________________________________________________________________
tomsrtbt-1.4.66
Mon, 15 Jun 1998 09:01:35 GMT
tomsrtbt-1.4.66.tar.gz is available at sunsite.unc.edu (to be placed
into system/recovery) and at http://www.toms.net/~toenser/rb/.
It is a boot/root rescue/emergency floppy image with more stuff than
fits. Bzip2, 1722KB formatting, and tighter compilation options helped
jam it all on. tomsrtbt is useful for "learning Unix on a floppy", as
it runs from a ramdisk, includes the man pages for everything, and
behaves in a generally predictable way.
The home page is: http://www.clark.net/~toehser/rb/.
_________________________________________________________________
MpegTV Player 1.0 released for Linux/Alpha
Mon, 15 Jun 1998 10:30:37 GMT
MpegTV Player 1.0 has been released for Linux/Alpha. MpegTV Player 1.0
is a realtime software MPEG video player with audio synchronization.
MpegTV Player is shareware (US$10) for personal and non-profit use.
Commercial licenses are available.
Key features include support for 8-bit, 16-bit and 24-bit displays,
random access, frame capture and a VCR-like graphical front end.
Download MpegTV Player 1.0 (mtv) for linux-alpha from:
ftp://ftp.mpegtv.com/pub/mpeg/mpegtv/player/alpha-unknown-linux/
_________________________________________________________________
Motif Interface Builder VDX 1.2
Mon, 15 Jun 1998 11:16:16 GMT
Release 1.2 of VDX, the Motif interface builder for Linux, is ready
for download. VDX provides interactive design of user interfaces based
on OSF/Motif and generates portable C and C++ source code. Tools like
the Resource Editor, the Browser and the interactive WYSIWYG view make
the design process very easy. Its object-oriented interface and
adaptable code generation are cool features.
Interested? Visit the VDX Home Page at http://www.bredex.de/EN/vdx/
_________________________________________________________________
R 0.62.1 released: statistical computation and graphics
Wed, 17 Jun 1998 13:20:17 GMT
R version 0.62.1 has been released and will propagate through the CRAN
mirrors within the next few days. There have been lots of changes; any
R user should definitely upgrade to this version.
R is a system for statistical computation and graphics. It consists of
a language plus a run-time environment with graphics, a debugger,
access to certain system functions, and the ability to run programs
stored in script files.
CRAN is a network of ftp and web servers around the world that store
identical, up-to-date, versions of code and documentation for the R
statistical package. Please use the CRAN site nearest to you to
minimise network load.
The CRAN master site can be found at the URL
http://www.ci.tuwien.ac.at/
_________________________________________________________________
Mobitex Radio Modem Driver
Wed, 17 Jun 1998 13:21:40 GMT
Announcing the release of a new network driver which implements the
MASC data link layer protocol, enabling Linux to use Mobitex radio
modems as network devices. Armed with radio modems and a subscription
to a Mobitex operator, you can create a network interconnecting two or
more Linux systems wirelessly using TCP/IP or your own custom
protocol.
The driver has been verified to be stable on 2.0.30 through 2.0.33
kernels and is hence ready for release. The package includes a basic
FAQ list, a HOWTO document, driver source and a couple of tools.
Take a look at ftp://ftp.linuxrx.com/pub/linux-contrib/
_________________________________________________________________
sfm 1.4 - Simple File Manager
Wed, 17 Jun 1998 14:02:46 GMT
Announcing the release of a new version of sfm. There are a lot of
great improvements between this version and version 1.1.
Some important changes:
* you can associate actions with files (using its extension or its
type given by file(1))
* a popup menu gives you the available commands and shortcuts
For more information look at http://www.chez.com/prigaux/sfm.html
You can find there a binary (i386, glibc, gtk+) version. It has been
tested (not fully) on i386 and solaris.
Any remarks and bug reports are welcome at pixel_@geocities.com.
_________________________________________________________________
Linux Router Project v2.9.2 - networking centric mini-distribution
Sat, 20 Jun 1998 17:32:40 GMT
v2.9.2 of the Linux Router Project is out. LRP is now fully glibc
based, and this is a very solid release.
You can download it from: ftp://ftp.psychosis.com/linux/linux-router/
And get more info from: http://www.psychosis.com/linux-router
_________________________________________________________________
Slidedraw-0.10 - drawing/presentation program
Sat, 20 Jun 1998 17:29:11 GMT
Slidedraw is a drawing program for presentation slides.
Some new features added:
* distinct canvas-window/drawing/print size
* grouping of objects, creating composites
* new and improved menu hierarchy
Get it at http://sunsite.unc.edu/pub/Linux/Incoming
_________________________________________________________________
SFS Software's iavaZIP
04 Jun 98 0100 WN
SFS Software announced a new version of its certified 100% Pure Java
compression utility, iavaZIP. The full-featured, pioneering file
compression program offers some unique features.
iavaZIP's key advantage is that it lets you create archives containing
files from multiple folders and subfolders--even from different
volumes--in the same session.
iavaZIP is compatible with PKZIP, supports 10 compression levels and
runs cross-platform on every Java 1.1 supported operating system like
Windows 95/NT, Unix, Linux, SGI, AIX and OS/2. The Java Archive format
(JAR) is also supported. The product is available now through
shareware distribution and is priced at $49 for the standard single
user license. Also available are Academic Single user licenses ($29)
and attractive high volume discounts.
SFS Software's web site is at http://www.sfs-software.com
_________________________________________________________________
Protecting Networks w/SATAN
Mon, 8 Jun 1998 15:48:49 -0700 (PDT)
Because SATAN (Security Administrator's Tool for Analyzing Networks)
could detect weaknesses on other systems (as well as your own) through
its web interface, it earned notoriety when released in April 1995 as
the tool that would "wreak havoc" on the Internet. The Oakland Tribune
even wrote: "It's like randomly mailing automatic rifles to 5000
addresses. I hope some crazy teen doesn't get ahold of one."
But as more and more "mission critical" applications are accessible
through the web, administrators are turning their attention to the
danger of attempted intrusion from outside the networked host. SATAN
is a powerful aid for system administrators. It performs "security
audits," scanning host computers for security vulnerabilities caused
by erroneous configurations or by known software errors in frequently
used programs. O'Reilly's latest release, "Protecting Networks with
SATAN", is an invaluable tool for network and security administrators
working with SATAN.
Protecting Networks with SATAN
By Martin Freiss
1st Edition June 1998 (US)
112 pages, 1-56592-425-8, $19.95 (US$)
http://www.oreilly.com
_________________________________________________________________
Conix 3D Explorer
Wed, 17 Jun 1998 19:51:05 -0800
Conix Enterprises, Inc. announces the release of Conix 3D Explorer for
Linux. With a single command, 3D Explorer brings your Mathematica
graphics to life in an interactive OpenGL window, providing advanced
rendering capabilities previously reserved for high-end rendering
systems.
3D Explorer provides a new graphics type, GLGraphics, with extended
graphics primitives and directives. New features include continuous
surfaces, display lists, inline transformations, and per-element
control over all graphics options.
3D Explorer comes with online documentation, including user's guide,
reference manual, programming examples, and demos. Quality email
technical support is provided by Conix Enterprises Inc.,
tech@conix3d.com. For more information, see http://www.conix3d.com
_________________________________________________________________
LinuxCAD v 1.55
Thu, 18 Jun 1998 06:34:43 +0000
Software Forge Inc. announces the availability of LinuxCAD v 1.55 on
July 25, 1998. LinuxCAD v 1.55 includes, in the base version, all
hardcopy capabilities, namely:
* output to the LaserJet family of printers,
* output to PostScript, black and white as well as color,
* output to HP-GL compatible plotters,
* output to the LinuxCAD MS-Windows print server.
LinuxCAD v 1.55 will be priced at the same level, $75 plus tax and
shipping. All users who prepay for LinuxCAD v 1.55 before July 25 will
get extended free upgrades until July 1999.
To learn more about LinuxCAD visit http://www.linuxcad.com
_________________________________________________________________
Nighthawk 2.1 and FunktrackerGOLD 1.5 (announcement)
Mon, 22 Jun 1998 23:13:50 +0930 (CST)
Nighthawk 2.1 (nighthawk-2.1.tgz) and FunktrackerGOLD 1.5
(funktracker-1.5.tgz) have now been released. You can find them on:
http://www.downunder.net.au/~jsno/rel/unix_projects
Nighthawk is an X11 arcade game with sound and music. FunktrackerGOLD
is a digital music tracker. Read my page for more details on them.
Take a look at http://www.downunder.net.au/~jsno; both come under the
GNU GPL.
_________________________________________________________________
CYBERSCHEDULER FOR LINUX v2.1
Wed, 24 Jun 1998 18:34:09 -0700
CrossWind Technologies offers CyberScheduler, web-based calendaring
and scheduling software for workgroups. It has been designed to
leverage an organization's existing web resources:
* running on Apache's web server
* with end user access from any desktop browser.
More information about CrossWind Technologies and a live on-line demo
of CyberScheduler is available on the Web site at
http://www.crosswind.com
_________________________________________________________________
Published in Linux Gazette Issue 30, July 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
This page written and maintained by the Editor of Linux Gazette,
gazette@ssc.com
Copyright © 1998 Specialized Systems Consultants, Inc.
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
(?) The Answer Guy (!)
By James T. Dennis, linux-questions-only@ssc.com
Starshine Technical Services, http://www.starshine.org/
_________________________________________________________________
Contents:
(!)Greetings from Jim Dennis
(?) Linux and SCO Keymap --or--
SCO Compatible Console Keymaps?
(?) linux kernel security --or--
Breakin' Out of the chroot() Jail adding "disabilities" to
Linux
(?)Dosemu and virtual terminals? --or--
Clipper/xBase Capacity Problems --- DOSemu as a Solution? "I
don't think so."
(?) NT Domain and Linux --or--
Linux as a "Domain Controller" for a WinNT Domain? Not Yet!
Linux use of an NT PDC/BDC for authentication?
(?) DAO software for linux? --or--
"DAO" (Disk at Once) CDR? Stump Me!
(?)tn3270 security
(?)readdress COM port to 3 or 4
(?) Lilo won't boot --or--
Installed on a Secondary SCSI HD: Lilo Stops at LI
(?)help on unix --or--
Running Unix/Linux Under Win '9x
(?)winprinters & MTAs: Pointers and Corrections
(?) FoxPlus for Linux? --or--
Dreaming about xBase tools for Linux
(?)auto response for email ?
(?)Connecting Linux to Win '95 via Null Modem
(?) PC lockups --or--
Hardware Lockups due to Graphics Load
(?) gzip from C program --or--
Compression Libraries to Link into a C Program
(?)LOVE THE NEW LOOK!!!!
(?)please, advice about Linux and C500 --or--
Linux PPC on the Umax C500 SuperMac: Not A Good Idea
(?)printing Solaris->Linux --or--
Remote lpd from Solaris to Linux
(?) Help Wanted --or--
User Shell on Virtual Console 1
(?) Memory deallocation problems --or--
Linux Memory Usage vs. Leakage
(?)tv cards and dual monitor
____________________________
Greetings from Jim Dennis
Well another month is upon us. This last month was particularly busy
since I was able to afford the USENIX technical conference, in New
Orleans --- the best annual gathering of fellow Unix and Linux nerds
I've ever found. If you can get your boss to send you to just one
computing technical conference in the next year --- ask for it to be
this one (or the USENIX/LISA --- Large Installation Systems
Administration which will be in December).
Linus was there with his wife, Tove, and their two baby daughters. He
agreed to host an "intimate little BoF" (Birds of a Feather
discussion) which turned out to have over half of the conference
attending it (much to his surprise).
The '97 USENIX in Anaheim had a "parallel track" for Linux. This year
had one for "Freenix" (collectively referring to FreeBSD, NetBSD,
OpenBSD, and the GNU HURD, in addition to Linux). It's important for
us (Linux users) to recognize that Linux wasn't the first "free" Unix
kernel, and it is by no means the only one.
I've been trying to encourage the free *BSD users (all variants) to
come out of the woodwork and show up at their local Linux user's group
meetings. I know they'll be welcome at the Silicon Valley LUG
(http://www.svlug.org) and I sincerely hope that they'll be welcome at
other Linux events. Now that we're getting enough market share to get
noticed in the press, and to have some effect on the decisions of
hardware and software vendors (particularly in the areas that relate
to documentation and NDA's) --- it would be a very bad time for us to
get embroiled in the sorts of infighting that's been stifling the
commercial Unix vendors for so long.
I noticed an interesting press release (forwarded to me by my wife)
regarding Microsoft's new "WISE" (Windows Interface Source
Environment: http://www.microsoft.com/win32dev/base/wise.htm) which
basically looks like a scheme to bolster the commercial Unix vendors
up in their battle against the free Unix clones (by providing them
with some limited support for running Windows '95 software). (From the
looks of it the WINE and Bochs projects may eventually be more
capable).
Luckily these, and the other interesting user space projects that are
going to make Linux more accessible to non-technical users, like
GNOME, KDE, and GNUStep are portable. Linux has been a primary
development platform for many of these projects --- but they all run
under other versions of Unix.
So, while it may look like Linux is "taking over the world" --- it is
also opening up a world of opportunity for all of the other Unix
variants. There are now a few million users of Linux that will feel
right at home in just about any Unix on just about any hardware.
Perhaps that's why Sun and SGI are both supporting Linux projects.
_________________________________________________
(?) SCO Compatible Console Keymaps?
From Jim Kjorlaug on 25 Jun 1998
I work for a company that sells vertical solutions using SCO Unix as a
platform. We are currently looking at Linux as another possible
platform, and I have found a possible contention. Does there exist a
keytable that causes the Linux keyboard to behave like an SCO console?
I have already worked out the termcap for SCO ANSI to work on Linux,
but some of the keymaps have me stumped. Any suggestions or advice
would be greatly appreciated. I realize that we could modify our
application, but it would be much easier if it were possible with a
keytable.
Thanks in advance for any help you can provide.
Jim Kjorlaug
Teleflora Technologies
(!) I don't know how a SCO console keymap is supposed to behave ---
but Linux does have utilities to remap the console keyboard to your
heart's content. All of the popular distributions include the
'loadkeys' and 'dumpkeys' programs (parts of Andries Brouwer's
'kbd' package). You can look at the man pages for these for
details.
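The basic round trip looks something like this (the map file name is
only an example):
dumpkeys > sco-like.map      # dump the current console keymap
vi sco-like.map              # change the keysym assignments you need
loadkeys sco-like.map        # load the modified map on the console
Your application's startup script could load such a map and restore
the default ('loadkeys -d') on exit.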
I've never used these packages much --- just once to set up "sticky
shift" keys for a friend who lost most of the use in one arm to a
stroke a couple of years ago and again to answer some other
question back before I started this column.
It does seem quite odd that you'd go for console specific binding
rather than using the more portable termcap/terminfo
(curses/ncurses) interfaces which would allow your app to be
accessed via terminals, over modem/dial-up connections, across
telnet sessions and from within xterms. However, I'm sure you have
your reasons.
Yann Dirson is working on a package called "Linux console tools"
which enhances the kbd package.
There is also a console fonts package (the 'setfont' command is
also included with many Linux distributions; it allows you to
choose from among about 100 different VGA/EGA compatible console
fonts, some of which are quite silly). Andries Brouwer is apparently
the co-author of the console fonts package, too.
Good luck on the port and welcome to the club.
_________________________________________________
(?) Breakin' Out of the chroot() Jail
Or: adding "disabilities" to Linux
From Ron Arts on 25 Jun 1998
Hello,
I saw a post of yours from 26 Apr 98 in
comp.os.linux.development.system where you said a lot of noteworthy
things about Linux security. I have also been talking to Jos Vos from
Xopen Systems (who wrote the ipfwadm package).
Both you and he noted the possibility of breaking out of a chroot jail
(once you become root there). It seems that devices are the weak
factor.
(!) It seems that letting anyone "become root there" is the weak
factor! If we can reduce the need to "become root" --- by providing
mechanisms other than "SUID" and "SGID" programs for accessing
"privileged" operations than we have made some progress.
One approach would be the POSIX.1e "capabilities" (which are more
like VMS style "privileges" than true "capabilities"). There is a
bit of preliminary work being done on this in the 2.1.x kernels ---
but nothing is likely to be usable in 2.2 (so you're looking at Linux
2.4 before there is "stable" support for any of that).
Another approach is to limit the damage that 'root' can do using
something like the BSD securelevel features. Last I heard on the
Linux kernel mailing list they had dropped plans to put in simple
'securelevel' support in favor of a "more flexible" approach ---
which would mesh better with the eventual POSIX.1e ("Orange Book")
work.
* (The implementations of 'securelevel' in all of the popular BSD
variants, free and commercial have been vulnerable to a few
attacks via the /proc filesystem and more recently via ptrace()
--- so having Linux adopt one of those designs might not be a
sound idea. We'll see).
I'm a little shy on the implementation details and design but I
think they said it would essentially be a bit field of limitations
that would be set on a per process basis. There would be bits to
prevent various syscalls like mknod(), chroot(), mount(), etc. In
the POSIX.1e model this would later become the "maximum privileges
mask" --- and the individual privileges would be set by meta data
on the executable files (think of that as a list of about 80 "P"
bits rather than just the SUID and SGID bits we have now).
The argument for this is that we could set whatever of these bits we
want on the 'init' process (PID 1) to accomplish the same limitations
as we get with BSD's 'securelevel'.
That's a pretty compelling argument so far as I'm concerned. My
main hesitation beyond that has to do with code complexity. The BSD
crowd has been trying to get their 'securelevel' implementations
right for years --- and the ptrace() bug was just found a couple of
weeks ago.
It's not a simple problem. NT's "object" model (and I use the term
"object" very loosely) provides ACL's on files, registry keys, and
all sorts of other OS elements. There is work underway to add ACL
support to Linux --- over some filesystems at least. However, I'm
convinced that ACL's are a fundamentally flawed security model ---
and that opinion is based on some pretty good academic work.
Unfortunately the true capabilities security model entails a
completely different programming paradigm --- it doesn't translate
to Unix conventions at all. In my research (purely "armchair" or
"book larnin'") I spent most of my energy trying to unlearn the
Unix, Netware, and NT approaches.
You can read more about the capabilities security model at Jonathan
Shapiro's "EROS" (extremely reliable OS) web site:
http://www.cis.upenn.edu/~eros/
(EROS is an ongoing research project which will hopefully
eventually be available as a production operating system).
(?) I have been thinking about disabling the mount() or better the
mknod() systemcall when executed from chroot'ed programs (patching the
kernel).
(!) I think the "capabilities" (or Linux "securelevel" or
"privmask") patches will allow you to disable access to these sorts
of syscalls. I also suspect that these "disabilities" (a more apt
description really) will be inherited by all forked processes. They
will certainly need to be immutable (by the process) and will have
to imply certain disabilities with regards to kmem and /proc access
by the 'root' processes that are running within these process
groups.
You can look at the existing patches (in the recent 2.1.1xx
kernels) and possibly build on that.
(?) Do you think that would be worth the effort? We currently run
ftpd, telnetd, sshd and some more things chroot'ed in a very minimal
Linux environment, based on the false assumption that even when you
manage to become root you cannot break out of it.
(!) The assumption that the chroot() jail is inescapable by rogue
root processes is very bad. You've discovered that.
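To make the weakness concrete, here's a minimal sketch of one well
known escape --- it assumes a rogue root process inside the jail can
run (or upload) the mknod and mount binaries, and the device numbers
(the first IDE disk's first partition) are just an illustration:
# run as root *inside* the chroot() jail
mknod /tmp/realdisk b 3 1                    # recreate the host's root block device
mkdir /tmp/realroot
mount -t ext2 /tmp/realdisk /tmp/realroot    # mount the real root fs inside the jail
cat /tmp/realroot/etc/shadow                 # ... and read (or modify) anything on it
Nothing in the chroot() call itself prevents any of this --- which
is why mknod() and mount() are exactly the calls you'd want a
"disabilities" mechanism to take away.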
The main advantages of chroot() have to do with limiting the number
of SUID/SGID programs that are accessible to an attacker trying to
exploit various vulnerabilities to get root or other unauthorized
access. The other advantage is that you can limit the amount of
snooping that a class of users (anonymous and guestgroup ftp, for
example) can perpetrate on other users on the system.
In other words you can limit the exposure of your "general" users
from some classes of other users. For a long time the most
important element of this was to prevent FTP users from grabbing
your passwd file and running 'crack' on it. With the advent of
shadow password systems that has been much less of a concern.
These days the most common approach to securing systems is to
create special, sacrificial hosts for each service and class of
users. Linux and {Free|Net|Open}-BSD have made this an increasingly
economical and attractive option since we can put any old "junker"
386 or better to work in this sort of role (some people are giving
away 386 and 486 systems these days). This is easy enough for
commercial sites --- but more of a problem for ISP's and
educational sites, which traditionally still have shell access to
at least some of their machines.
(?) I think very few programs use mknod(), and those are probably the
programs you wouldn't allow in a chroot'ed environment anyway. I also
think it would be a relatively small patch, I've done some digging and
- not being a kernel expert - it seemed pretty easy. The only thing
left to find out is how to detect in the kernel that the current env
is chroot'ed.
(!) The kernel obviously already tracks the 'root' directory
(device:inode) for every process. I think it's a field in the uarea
struct (a data structure maintained by the kernel for every
process).
(?) Can you offer any thoughts on this, I'd like to know if I am on
the wrong track (again) here.
Thanks in advance,
Ron Arts
Netland Internet Services
(!) Look at the existing (2.1.x) sources for references to
"securelevel" and "capabilities" --- I'm sure they're in there
somewhere. You can also consider contributing to the Linux Security
Audit project. See the following URL's for more details:
The Linux Weekly News article on it (search on the keyword "audit"):
http://www.lwn.net/980625/
Their current archives/web site:
http://www.nas.nasa.gov/Pubs/Mail/archive/linux-security-audit/
If your organization needs these features and is willing to donate
some web space and some personnel time and expertise to the project
--- you'll be doing yourself and all of us a lot of good.
_________________________________________________
(?) Clipper/xBase Capacity Problems --- DOSemu as a Solution?
"I don't think so."
From Steven Jackson on 25 Jun 1998
Hi AnswerGuy,
I was reading an article on the web about diskless workstations and
redhat when I recognised your name, (I think you helped me out with
redhat a long time ago, thanks).
(!) You're welcome.
(?) I look after a small network of 4 PCs at a doctor's surgery which
runs an accounting package and an appointments diary compiled under
Clipper. System Manager is run on the host pc which does all of the
local processing of these applications and the clients run as virtual
terminals.
(!) I don't know what you mean by "system manager" --- from what I
remember/know of dBase and Clipper these were designed as
single-user database systems. The multi-user deployment of xBase
applications normally relies on "record locking" (similar to file
locking but allowing one to request exclusive access to a portion
of a file).
In this model the .DBF files are normally stored on a network
filesystem (Netware, LANtastic, and later WfW among others). I
don't know if Samba or the Mars-NWE (Netware emulator) supports
these forms of record locking.
It is unclear from your description how you are running this. You
mention 4 PC's and Clipper (a DOS-based compiler/development
package for dBase programming), which leads me to think of
networked DOS systems --- then you mention "virtual terminal" which
suggests that you're using a multi-user OS (like Linux).
Are you running DR or CCI's "Concurrent DOS" (or their later
"M-DOS" or "Multi-user DOS") or something like TSL's "PC-MOS"
(another multi-user MS-DOS clone)? Is "System Manager" yet another
multi-user DOS?
(?) Over the past year or so the system has run slower gradually to
the point where it is getting annoying. I'd like to try running linux
on the fileserver and somehow run the dos based clipper programs under
dosemu. I think it would be wise to keep all the *.dbf files on the
server rather than sending them over the network. I got the idea from
the recent Linux Journal article about the Latvian Police dept.
(!)
The first question is:
Why is the performance degenerating?
The obvious suggestions are:
* Have you been regularly "pack"-ing your databases (purging
deleted records and transactions)?
* Have you been maintaining your indices? (Indexing is usually a
vital key to db performance).
* Have you been defragmenting your filesystems regularly?
* Has your system utilization increased in some marked way
(you've added *lots* more customers, etc)?
* Does your current design have any features or support for
migrating old and inactive records to "archival" or
"historical" databases (tables) so that the "active" db
routines are maintained at feasible sizes?
* Are there other activities on your LAN that might be causing
network congestion?
Regarding the notion of running the existing program under DOSemu .
. .
I don't know if that will do any good at all. Since we don't know
what is causing the problem, it seems premature to recommend
solutions. My first thought is that moving the processing from four
systems onto a single one (even a single system under a superior
OS) is unlikely to improve overall performance.
(?) Do you have any ideas about how I could embark upon this?
Thanks,
Steve Jackson
(!) I have many ideas. The first, and most obvious, would be to
port the application to a client/server database design --- one
that's designed to be multi-user and scalable at the outset.
Another, less radical approach would be to take the existing
Clipper sources and port them to Flagship (an xBase to C
development package from WorkGroup Solutions).
... their web pages suggest that they will soon be shipping betas
of a "visual" frontend for xBase programming. That should be
interesting for all those "VB" and "VC++" developers that are still
clinging desperately to Microsoft's platform.
Or you might try X2C from:
http://www.on-the-net.com/x2c/
The questions I asked above may give you some ideas for some
"stopgap" measures (re-index, defrag, migrate inactive records,
etc). In the long run you'll want to do some analysis to see if the
current system can continue to meet your needs.
If you do decide to go with a client server model you have many
choices that run under Linux. There are the free and shareware
packages like mSQL, Beagle and MySQL, and there are a number of
commercial packages like InfoFlex, Adabas, and the JustLogic SQL.
Rather than give URL's to all of these I'll just point you at the
definitive guide to RDBMS packages for Linux --- maintained by
Christopher B. Browne at:
http://www.hex.net/~cbbrowne/
http://www.ntlug.org/~cbbrowne/rdbms.html
... and another excellent list of Linux business applications
maintained by Linas Vepstas (NOT to be confused with Linus the
kernel guy) at:
http://www.linas.org
http://www.linas.org/linux/db.html
I should mention that you aren't limited to just xBase or SQL ---
there are a number of alternative DBMS systems that are available to
Linux and other Unix users and programmers --- including a number
of object-oriented and hybrid systems. Allegedly there's even Linux
support for the venerable Pick system.
_________________________________________________
(?) Linux as a "Domain Controller" for a WinNT Domain? Not Yet!
or: Linux use of an NT PDC/BDC for authentication?
From Cesar Augusto Kant Grossmann on 25 Jun 1998
Hi James!
Again a problem for me, and an exercise for you.
Is it possible to make the Linux Box do login authentication requests
from an NT Domain Server?
(!) Not yet. The Samba team is working on this and hopes to have
something ready within a couple of months. Lest you think this is
all wasted effort (on the thought that Microsoft will ship NT 5.x
in a year or so) --- the indications seem to be that the MS NT
implementation of Kerberos will still rely heavily on the data
structures that they currently use in their PDC/BDC protocol. So,
the work being done now is an investment in the future as well as a
hope for the near-present.
(?) I have a Linux box in a TCP/IP network, part of a large NT Domain,
and want to allow NT domain users to log in to the Linux box and
access the Internet from it. The idea is to provide access to the
Linux box without having to register every user. The users don't need
a regular account, with home directory, because Internet access is not
frequent (thanks to a slow connection) and they only use it for
surfing (not email, not FTP).
(!) Hmm. It looks like I read too much into your first paragraph.
This sounds like you want Linux to be a client to an NT domain
controller. I think there is a PAM (pluggable authentication
module) for doing this.
Since the whole PAM project is still in beta (and not moving nearly
fast enough for my tastes --- not that I've contributed to it nor
that the programmers would want me to) I can't make any promises on
how well it will work.
However the state of PAM can speak for itself at:
http://www.kernel.org/pub/linux/libs/pam/
(Andrew Morgan's pages on the Transmeta sponsored Linux site).
The module you might want to play with is by David Airlie and is
at:
http://www.csn.ul.ie/~airlied/pam_smb/
Other modules (for things like one-time passwords, authentication
on a Netware server, a couple of different "SecureCard" and
"DESGold" cards, RADIUS, support for Kerberos realms, etc) can be
found by browsing around at:
http://www.kernel.org/pub/linux/libs/pam/modules.html
(?) No, I don't want to make the Linux Box act as a firewall (I don't
have authorization to do that). And, again, sorry my bad english...
TIA
Cesar Augusto Kant Grossmann
Uruguaiana - RS - Brasil
(!) Given the muddy murky nature of the term "firewall" the
difference between what you're doing and "acting as a firewall" may
be purely a matter of semantics. However, if it'll keep your
management happy I'll go into a Brazilian court of law as an
"expert witness" to state my opinion that this is not a "firewall."
If by "surfing" you mean that your users will only be using the
Linux system as a web proxy --- why are you fussing with
authenticating them at all? Why not just install Apache and
configure it purely for caching/proxy use --- or use Squid (there
are RPM's avaiable --- they were included with my copies of
S.u.S.E.
Apache, CERN, and Squid can all be configured as caching web
proxy/servers and can all be configured with a variety of
limitations on which systems are allowed through in which
directions. Do you really care which user is logged into the
workstation that is using these proxies? That seems like an odd
requirement unless you're also trying to enforce some other
policies (like certain classes of employees are only allowed to
"surf" during their lunch hour, etc).
I suggest you actually review your requirements a bit further. It
sounds like you are complicating matters more than the situation
requires.
_________________________________________________
(?) "DAO" (Disk at Once) CDR?
Stump me!
From Mark Heath on 25 Jun 1998
Hi there,
I've been searching high and low for DAO (disk at once) CDR recording
software for linux. Does any exist, Commercial or otherwise?
I've emailed Jeff Arnold about a Linux port and he bluntly refused.
I've emailed HyCD who have a tool that appeared to support DAO and
claimed UNIX support. But their software didn't support DAO and they
weren't interested in a Linux port. I've informed them of this hole in
the Linux software market.
The closest thing that appears to be available is that Joerg
Schilling's cdrecord supports DAO MMC-3 (err i think that is the spec)
Of course my CDR (HP 4020i) isn't MMC compatible.
I've had a look at writing my own but it appears that every CDR has a
different command set to write in DAO mode. I think I was a little
out of my depth, since I couldn't even get the CDR to read raw
sectors.
So your help would be much appreciated. Thanks.
Mark.
(!) Well, you have me stumped.
I don't know anything about the difference between DAO and other
forms of CDR recording. Normally, I'd spend an hour or two hunting
around on Alta Vista, Yahoo!, Savvy Search, DejaNews, etc and
pulling out more of my hair to find out. However, I have a book to
write and a wife to feed, and it is just too close to my deadline
for me to wait until tomorrow.
So, what is DAO and why would you need it? What is the difference
between cdrecord and cdwrite (the one I use with my Ricoh CDR)?
Have you tried them both? What is MMC? Who is Jeff Arnold? Who are
HyCD and should we care enough to start another Linux grassroots
"petition-the-vendor" campaign or should we just write more code to
"do-it-ourselves"?
I'll publish this one --- and let you and the rest of my readership
nail me with the answers. (Naturally I'll bounce you copies of the
other responses as they trickle in).
_________________________________________________
(?) tn3270 security
From Art Blair on 25 Jun 1998
When I try to use tn3270 or X3270 on my redhat 5.0 box to connect to
our school's system I get
TELNET Server: Session security is required.
TELNET Server: Good-bye!!!
Connection closed by foreign host.
Is there a different version of tn3270 that has session security or
some way to enable it with what I have?
Thanx, Art Blair.
(!) Are you sure you want to be using tn3270 (or x3270) to make this
connection? Are you connecting to an IBM mainframe or minicomputer
(presumably using the 3270 "block mode" --- full-screen protocol ---
and EBCDIC)?
Also does your site use Kerberos or some form of SNA security
(encryption or host-to-host authentication)?
The sad fact is that I know nothing about 3270 emulation or about the
SNA protocols. You'll want to contact your site admin or help desk to
find out more about their requirements. They should also be able to
let you know if there are any freely available client/terminal
emulation packages that are suitable for use with their facilities.
(?) please do not publish my email address or use it for advertising
(!) We usually strip out e-mail addresses from the published
version of the column.
_________________________________________________
(?) readdress COM port to 3 or 4
From PJ on 25 Jun 1998
can you tell me how to readdress COM port2 to port 3 or 4? I need to
use COM port 2 for other device.
(!) No. I can't. You'll want to refer to the documentation that
should have come with your hardware (this is almost certainly a
hardware issue that is completely unrelated to the OS or software
that you're running). The details vary among manufacturer, devices
and models.
If you have a couple of COM ports built into your motherboard it is
possible that you can disable or reset the I/O addresses, IRQ's and
other details for your COM ports via the CMOS setup program (the
interface through which you set the date and time, the hard drive
type and geometry and various other firmware settings that are
stored in extra registers of your PC's clock chip --- a chip which
uses CMOS technology so that it draws very little power and is thus
suitable for operation off of a battery while the system is powered
down).
This "setup" program is usually (almost always) stored in the
system firmware (the BIOS ROM's on your motherboard) and is
typically accessible at boot/power-up via some system dependent
keystroke. Usually there is a message that is briefly displayed to
note what the magic keystroke would be --- something like:
"Press not to enter Setup"
If that doesn't work (either because your COM ports are not on your
motherboard or for other reasons) you can open up the case and look
at the various DIP and/or "berg" (jumper pins) settings that you'll
find. Some of them may be labelled. There might also be a
manufacturer's mark that might lead you to a website or phone
number where you can get support and documentation for the device.
If you can't find any documentation for some cheap multi-function
(IDE, floppy, COM, and parallel port) card --- your best bet is to
buy a new one (typically $10 to $35 US) and toss the old one into a
drawer as an emergency spare.
As a final note: please consider what it's like to answer such a
question. You give no details about what sort of system you have,
what you've tried (do you have any docs, have you looked at them),
what device you're trying to add (odd that it must be on COM2 ---
how do you know that), what OS distribution and software you're
running, etc.
You send a two line question which cannot be reasonably answered in
less than fifty. In IRC and on most newsgroups and mailing lists
you'd either be ignored or flamed. We're all volunteers here and
the one thing we ask is that you do your homework before you post.
I'm not saying this just to sound crabby (if I was going to be
irate, I'd've just deleted this). If you don't do your homework ---
and put considerably more thought and energy into your questions ---
you won't get any satisfaction out of the Linux community.
_________________________________________________
(?) Installed on a Secondary SCSI HD: Lilo Stops at LI
From Rick V Smith on 9 Jun 1998
(?) I have installed linux on my second scsi drive the swap on a small
partition on my first scsi. and lilo on a big mbr for my win 95. the
start of linux went well but when I shut down and went to restart all
that happens is Li and the system hang's
Any Idea's.
Thank's Rick
(!) I don't know what you mean by "and like on a big mbr" --- all
MBR's (master boot records) are the same size on PC's --- one
sector!
It sounds like your BIOS can't "see" the 2nd SCSI drive -- so Lilo
can't "see" it either. The easiest solution would be to install
LOADLIN into a DOS/Win '95 directory --- with a copy of your
kernel(s). The kernel doesn't rely on the BIOS to access your
drives (since it provides 32 bit native drivers for your SCSI card
--- etc) so it will find its root filesystem with no problem.
Another thing to try is to add the "linear" option to your
/etc/lilo.conf --- and then rebuild the boot block and boot map
using the /sbin/lilo command. Read the lilo man pages and/or look
at the lilo "user" and "tech" .dvi files using xdvi (under X
Windows) for details.
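A minimal sketch of what the relevant /etc/lilo.conf entries might
look like --- the device names are assumptions based on your
description (Linux root on the second SCSI drive), so adjust them to
match your system:
boot = /dev/sda        # write the boot block to the first SCSI drive's MBR
linear                 # use linear addresses in the boot map
image = /vmlinuz       # the kernel
label = linux
root = /dev/sdb1       # root filesystem on the second SCSI drive
# after editing, run /sbin/lilo to rebuild the boot block and map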
There may be other settings that you'll have to tweak to get it
working. This is particularly true if you have a large SCSI drive
(my guess is that your second drive is bigger than 2 Gb -- and your
first one isn't). Look in the CMOS/Setup settings (or whatever your
SCSI card provides) for things that suggest that it is doing
something "fun" to make the large drive "DOS compatible").
Jim,
(?) I found the following line you wrote in a response to someone else
and this cured my hair loss problem; it probably worked better than
Rogaine. Thanks for the time and insight.
Rick
# The stanza for booting Linux.
image = /vmlinuz # The kernel is in /vmlinuz
label = linux # Give it the name "linux"
root = /dev/hda2 # Use /dev/hda2 as the root filesystem
vga = ask # Prompt for VGA mode
append = "aha152x=0x340,11,7,1"
# Add this to the boot options,
# for detecting the SCSI controller
http://sunsite.unc.edu/LDP/HOWTO/Installation-HOWTO-8.html#ss8.2
_________________________________________________
(?) Running Unix/Linux Under Win '9x
From John Riddoch on the comp.unix.questions newsgroup on 05 Jun 1998
Jeff wrote:
I need a question answered. I am running Windows 95 and soon 98. ...
I was wondering if there is any way to run the unix program itself in
a program window in Win 95,
unix is not a program; it is an operating system. You _cannot_ run two
operating systems at the same time on the same hardware. Dual-booting
is a different matter.
(!) And running an OS under simulation or under a VM is also a
"different matter." Also note that the phrase "OS" is not so
precisely defined that you can defend this position. For example
the IBM mainframes support VM's (virtual machines) that allow
the concurrent use of multiple OS's. Also consider the case of Tenon
Systems' "MachTen," a microkernel OS that supports MacOS running as a
personality under the microkernel.
(?) just like you can run win 95 the same way on a mac.
???? I sincerely doubt it. Perhaps the mac had an emulator that ran
win 95 programs. Apart from anything else, win 95 is i386 only and
won't run on a 68000 (or whatever macs use these days).
(!) He's probably referring to VirtualPC --- an emulation of the
hardware, including CPU, video, disk, I/O, and ethernet chipsets.
There's also RealPC. These are the most popular PC emulators under
MacOS.
Modern Macs run the G3 (PowerPC) processor, and the performance of
Win '95 under VirtualPC is tolerable (about equivalent to a Pentium
90 on a 250 Mhz G3 Powerbook (laptop) and about a Pentium 75 on a
180 Mhz Performa).
Getting back to the original question:
There is a shareware package (distributed as source code and
available for free evaluation) by Kevin Lawton called Bochs.
This started as a PC (hardware) emulator for Unix
(including Linux) that is allegedly capable of supporting Win '95
under emulation. It apparently isn't quite up to supporting NT
(apparently the CPU emulation is only 386 and NT requires 486 or
Pentium emulation). For info on that look at the Bochs web site:
(http://world.std.com/~bochs/). It looks like Kevin will be
upgrading the processor emulation as time (and possibly funds)
allow. Apparently you can License this package for $25. (I haven't
used it yet, but I might send him the money just 'cause I'm so
impressed by the effort).
I know this doesn't answer the question yet, but hang with me a
moment. Someone named David Ross seems to have ported Bochs to the
Win32 platform, thus allegedly allowing one to run Linux, FreeBSD,
or (presumably) most other forms of x86 Unix.
(?) if you can gimmie a hand and maybe tell me some sites where i can
download some software please tell me.
(!) See above.
(?) You might try http://www.linux.org/ for a few pointers. Do some
web searches for linux and read some stuff. RedHat linux 5.0 is a
reasonable version which is nicely pre-packaged for you and fairly
easy to install (http://www.redhat.com/).
(!) Having answered the basic question (where can you find a PC
emulator for Win '9x) I have to add my own suggestion:
Don't do it.
You can buy a cheap PC (even an old used 486) for next to nothing
(I've recently had one 40Mhz 386 given to me for free); and you can
install Linux on that.
(My main household server is a 10 year old 386/33 with 32 Mb of
RAM. Eventually I'll install some extra RAM and a new disk into
that "new" 386 and throw it up as an extra server on my LAN).
Once you have a machine (give it at least 16Mb and at least a 540Mb
drive) then you can just slap a null modem between it and your
desktop machine, or toss in a couple of ether cards and a
cross-over 10BaseT cord (or even buy a little 4 or 5 port hub). Once
that's done you can use a terminal package (like Hyperterm, Telix,
or K95 --- Columbia U's Kermit for Win '9x --- or even Kermit for DOS)
to connect to the Linux box. If you go the ethernet route you can
use Win '95's 'TELNET.EXE' or you can still use K95 (it's also a
telnet client --- and its terminal emulation is far less buggy
than Microsoft's --- so you won't need a custom termcap/terminfo
file to run "curses" (Unix/Linux "full screen" terminal/console)
applications).
There are two reasons for me to suggest this approach:
First, you are likely to be very unhappy with the performance of
running any form of Unix under emulation. Although Linux performs
adequately on a 386 with only 16Mb of RAM --- and some kernels can
run in as little as 2Mb --- you'll probably just find emulation to
be too frustrating to be useful --- particularly when using any
Unix networking utilities.
The only two viable reasons I can see for the mode of operation
that you've requested are:
* You want to play with Unix to learn it.
* You want to use Perl/awk, or other text processing tools that are
considered to be "Unix" utilities.
You won't learn as much about Unix by running it under emulation
--- and you'll probably end up being too frustrated by its
performance to come away with a realistic appreciation of it.
In the other case you can get versions of Perl, awk, and most other
Unix utilities, shells, editors and many other tools that have been
ported to Win32 (and even to DOS, often using the GNU'ish Go32 "DOS
extender").
The other reason for my suggestion is that Linux, even on a lowly
386, makes a great server. My box has over 6Gb of online storage
(which I'll probably double in the 40Mhz) a magneto optical drive,
a CD-ROM and a CDR recorder, a 4mm DAT autochanger, a modem line
(which handles uucp, incoming and outgoing fax, dial out
terminal/BBS'ing, dial in terminal, and dial out PPP and will
handle dial-up PPP when I get around to configuring it), a null
modem into the living room (for use from an old XT laptop) and some
other toys.
The machine has currently been up for about three months.
I forget why I rebooted three months ago, maybe I built a new
kernel for it or maybe I just made some changes to the startup
files and wanted to make sure it would come up automatically. It's
been used as my mail gateway and news server for a few years --- and
it was used as my primary interactive machine (mostly text editing)
for years. My wife and our various house guests sometimes still use
it or the dumb terminal to read their mail (if they don't want to
use one of the Pentium's in the living room or in my bedroom).
Sometimes I dial in to it from a client site (I'm a consultant)
or even from some local coffee house using the Ricochet wireless-to-
telco gateway (offered in selected areas by Metricom:
http://www.metricom.com/).
You can use Linux as a gateway. Its kernel offers an optional
feature called "IP Masquerading" which is a special form of
"network address translation" (NAT) that allows you to hide a whole
network of computers (using "private net IP addresses" like
10.*.*.*, 192.168.*.* and others defined in RFC 1918). It is
trivial to install a package called 'diald' that will dial up your
ISP on demand (automatically, when any of your computers try to
access the Internet --- or any other non-local nets) and will
automatically drop the line after a configurable period of
inactivity. This puts virtually no load on the machine (it's not
even measurable on my 386!).
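On a 2.0.x kernel with masquerading compiled in, the firewall side
of that setup boils down to a couple of ipfwadm commands (this is a
hedged sketch; the private network number is just an example, and
diald has its own configuration on top of this):
# don't forward anything by default...
ipfwadm -F -p deny
# ...except traffic from the private LAN, which gets masqueraded
ipfwadm -F -a m -S 192.168.1.0/24 -D 0.0.0.0/0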
Another handy server role you can assign to your Unix box (Linux or
otherwise) is as a household schedule/reminder service. The Unix
'cron' and 'at' facilities are just perfect for this. You can write
simple scripts and schedule them for periodic execution (cron) or
for one time execution in the future (at). With slightly more
complex scripts (using the GNU 'date' command, and simple shell
conditionals and tests) you can do arbitrarily complex scheduling.
It is truly easy to set this up to automatically e-mail you
reminders, post them to your "intranet" web server, or even page
you (using a normal modem) as an alarm service.
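Both flavors are one-liners; a hedged sketch (the addresses and
wording are obviously made up):
# recurring reminder --- add a line like this with 'crontab -e':
#   0 8 * * 1   echo "Trash goes out today" | mail -s reminder jim
# one-shot reminder via at(1):
echo 'echo "Dentist at 3pm" | mail -s reminder jim' | at 2pm tomorrow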
Eventually I expect someone to release a set of CGI scripts to act
as a front end to a reminder/alarm service --- which you could toss
up on your "intranet" server.
Using a little box as an "intranet" web server for a household or
small business also takes almost no memory or CPU power on a Linux
or FreeBSD box. I think the overhead is about 70K for a small web
server, and you can even configure them to be "dynamically" loaded
if you're really pressed for RAM. The little box can also function
as a fileserver for your Win '95 box by using Samba, a Unix package
that provides Windows/NT compatible file sharing. It's easy to run
all of these functions on the same box, they don't conflict with
one another at all, and most of them present very little load on
the server.
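A hedged sketch of the handful of smb.conf lines a simple share
needs (the names are placeholders; pick the security model that
matches your network):
[global]
   # must match the Windows workgroup name
   workgroup = HOME
   security = user
[public]
   # the directory to export
   path = /home/public
   writable = yes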
On top of all that you can use the old clunker to run household
appliances over the old BSR X-10 "Powerhouse" interface (also sold
as "ActiveHome"). Larry Wall just gave a talk at the Silicon Valley
Linux user's group showing us a demo of how he's automated his
house. It was incredibly amusing. He has a detector on his clothes
dryer, in the garage, that announces through the household PA
system when the laundry is done; motion sensors on the walkway
leading up to the front door to announce visitors; and scripts to
tell his wife and kids when they get mail (presumably he gets too
much mail to want such an announcement for himself).
Naturally you can put a sound card in the PC and run PA/Speakers
off of it to do various cool things.
The point is that you can't do all of this when you're running Unix
in an emulator under Win '95 (since the chances are too great that
you'll need to reboot it, and also since your emulator won't have
access to most of the hardware that we're talking about --- it can
only access the virtual/emulated hardware). The other problem is
that Win '95 is generally not nearly as stable as any form of Unix.
Even NT doesn't come close to Linux, FreeBSD, or any of the popular
forms of Unix for stability.
For the same reasons you won't benefit nearly as much from a dual
or multi boot configuration. There's not much point to having a
"server" that you keep rebooting to play Doom (which is available
for Linux, BTW) or to read that MS Word document.
Although I've focused on Linux (and I prefer it for my personal
use) all of what I've said applies to FreeBSD, NetBSD, and OpenBSD
among others. (There are some differences; the *BSD's don't have
their NAT/masquerading and packet filtering in the kernel --- it's
run as a user process --- things like that.) If you're learning Unix
for professional reasons I'd definitely suggest that you clock in
some time and practice on any one of the BSD systems as well as on
a Linux box. Potential employers (in Unix savvy companies) will be
far more intrigued by entry level applicants who've worked with
BSD.
Also, if you want to play with the X Window system (the dominant
technology for supporting GUI's under Unix --- though, technically,
it is a communications protocol and programming API --- and not a
"GUI") you won't want to run it on less than a Pentium. In that
situation I'd put one (character only) installation on the
cheap/used PC and install a dual boot configuration on your main
(Win '95) workstation. The best way to do that is to install an
extra hard drive on the workstation (so you don't need to
repartition your existing drives).
Even if you decide to put one of the BSD's on your cheap/used server
you should probably still put Linux on your Win '9x workstation.
There are two reasons for this:
1. There are more commercial productivity applications available
for Linux (WordPerfect, StarOffice, Applixware, Cliq, Wingz, etc).
2. Linux has very good support for DOS and Windows filesystems (and
even some read-only support for NTFS and HPFS). You can even install
a small Linux distribution directly into a DOS subdirectory.
You could install Linux on the workstation and have it access most
of its files (almost all of them) over the network (over NFS). All
you need on a Unix box is a fairly small "root" filesystem. 20 Mb
is enough for all the "root" files (all you really need is /etc,
/dev/, and /sbin -- the rest can all be mounted over the LAN though
I'd suggest adding a local swap file or partition, and a local /tmp
directory).
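A hedged sketch of the /etc/fstab such a workstation might use (the
server name and exported paths are made up):
# local root, swap and /tmp; the bulky stuff comes over NFS
/dev/hda2      /      ext2   defaults   1 1
/dev/hda3      swap   swap   defaults   0 0
server:/usr    /usr   nfs    ro         0 0
server:/home   /home  nfs    rw         0 0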
If you do an installation like this (with one server installation
on a dedicated PC and another on your workstation --- say FreeBSD on
the server and Linux as a multi-boot on your Win '9x box) you'll
get the maximum benefits and you'll learn enough about Unix to
qualify for professional work in the field.
So, in conclusion: You won't learn nearly as much about Unix from
any form of "emulation" or dual-boot arrangement. The principal
advantage of Unix has always been the client/server model it uses.
Unix "wants" to be a server. It's as important to learn this
philosophy as it is to learn the syntax for a couple hundred Unix
commands. So, that's the best approach to installing and learning
it around your house.
_________________________________________________
(?) winprinters & MTAs
Pointers and Corrections
From John Levon on 05 Jun 1998
Hi, two points:
1) for win printers, someone has written a PPA driver. i don't have
the URL, but it was mentioned in 2 cent tips a while ago i think. This
possibly enables win printers to be used with linux
(!) In fact I had heard of it. However, it had not progressed far
enough along, last I checked, to be worth mention in LG. It's a
tough call for me whether to go dig up the latest scoop on a
digression or whether to gloss over it in the interests of
conveying the more important message.
The important message is that "Winprinters" and "Winmodems" are a
big lose for everyone involved (even for Windows '95 users, who may
find them "abandoned" in future versions of Windows and NT). These
are not "progressive" developments in the hardware market. The
other important message is that we shouldn't have to reverse
engineer these protocols.
While I admire the heroic efforts of people like Andrew Tridgell
(the original architect of Samba, who implemented it by analyzing
the packets off "the wire"), that sort of heroism shouldn't be
necessary just to use a printer.
For those that are interested in some info on the HP PPA printer
drivers for Ghostscript and Linux look at:
Ghostscript Printer Compatibility
http://www.cs.wisc.edu/~ghost/printer.html
... and follow their link to:
(Tim Norman's) PPA for the masses
http://www.rpi.edu/~normat/technical/ppa/index.html
... and for other printer stuff for Linux try Goob's:
Linux Links: Software : Utilities : Printer
http://www.linuxlinks.com/Software/Utilities/Printer/
(?) 2) instead of www.faq.org, try www.faqs.org. this is a top site
that automatically contains HTML versions of FAQs on rtfm.mit.edu
thanks,
john.
(!) Doh! I looked for that by memory and tried "faq.org" first. I
didn't think to try "faqs.org" (and it wasn't in the bookmark file
on the machine I was typing from at that moment). I remember being
impressed with faqs.org and was disappointed when I looked "back"
(and found the wrong one).
Thanks for catching that!
_________________________________________________
(?) Dreaming about xBase tools for Linux
From Michael "Mookie" Kepler on the L.U.S.T List on 04 Jun 1998
Is there a FoxPlus program for Linux ? When I use the SCO FoxPlus on
Linux with iBCS module running, it can not read the data files.
Thanks,
Jyh-shing Chen
Michael "Mookie" Kepler
Ha! Dream on! I'm decloaking and posting just because I'm glad to meet
another living dinosaur. I, too, have too much experience with and an
irrational attachment to FoxPlus.
(!) I presume Fox-Plus is an xBase product related or similar to
FoxPro. If so you might look at WorkGroup Solutions "Flagship"
(http://www.wgs.com/fsad.html).
This is a full dBase compatible system, and xBase compiler.
(Actually I think it does a "compile to C" --- then you'd use gcc
to actually produce your binaries. That makes it more portable I
suppose).
You could also look at Christopher B. Browne's incredible annotated
link farm of Linux business and productivity applications:
http://www.hex.net/~cbbrowne/
... which has a page specifically on xBase dbms packages for Linux
at: http://www.ntlug.org/~cbbrowne/rdbms05.html
Oddly enough Christopher doesn't mention Versasoft's dbMan (dbMan
IV or dbMan 5.x). Perhaps the product has been discontinued. I
couldn't find any URL for it though there are a number of
references. I just guessed at "versasoft.com" and glanced at their
web site, which only mentions one product (VersaTOOLS; a FoxPro
add-on?). I've blind-copied the one e-mail address listed thereon,
so that he can respond with any info on the fate of dbMan, if he
feels so inclined.
So in answer to your question:
Yes! Dream on! There are dbms apps for Linux, and you DON'T have to
use SQL.
(Also, if you ever want to work with a dbms package that's less
like "DOS" and xBase, nothing like SQL, and more like Unix shell
script programming, look at Revolutionary Software's package: /rdb
--- they have a Linux version. Apparently this /rdb is related to
the Rand/Hobbs RDB --- Christopher's pages talk about this a little
bit.)
(?) I made my living pushing the limits of Sco FoxPlus for five years,
starting in 1989, making it do things it was never meant to do. It is
frustrating that so many people think that SQL and Relational are
synonyms, and that Relational and XBase are mutually exclusive. Every
database application I created with FoxPlus conformed to the
Relational data model. There is nothing in FoxPlus to prevent this.
Please let me know if you find anything FoxPlus-esque that works under
Linux. I've been looking myself and have found nothing comparable. If
they would just release the source code, we could get somewhere.
Whenever I encounter a trivial programming task, especially ones
involving tabular data, I always think of how much quicker and easier
it would be to turn it out in FoxPlus than 'C', or _shudder_ PlSql
(yuck!).
____________________________
(?) From Thomas Good on the L.U.S.T List on 5 Jun 1998
Jim - I have the opposite problem. I want to lose foxpro in favour of
SQL. I run an odd mix of dbs including Postgres, Progress and FoxPro.
The foxpro is sitting on a dos box and is in need of extinction. It is
(obviously) single user and so the person who sits on the box has to
do all of the data input and answer the phone - doing queries as
requested.
I am moving her data onto a linux box and I want to shift the code
from foxpro to SQL. Any converters out there? Front end is not too
important as I will use perl (5 with DBI 0.91 and DBD-Pg 0.69). I just
need to rework the existing queries...thanks!
Tom
----------- Sisters of Charity Medical Center ----------
Department of Psychiatry
Thomas Good, System Administrator
North Richmond CMHC/Residential Services
(!) Look at Christopher's web pages (I cited it in my longer
message but it's at: http://www.hex.net/~cbbrowne/)
Specifically he lists some conversion utilities and .DBF
libraries at: http://www.ntlug.org/~cbbrowne/rdbms05.html
Also don't forget to check the LSM (Linux Software Map). Here's a
couple of entries from there (not listed on CBB's pages):
.......
Title: Light DBF client/server dbms (LDBF)
Version: 0.9.9 beta
Entered-date: 17NOV95
Description: This is client/server dbms that operate with
DBF files and compatible with Foxpro CDX indexes.
Clients connecting with server via TCP/IP
and works with databases as on local machine.
Supports transactions,multi-user operation,
stored procedures,triggers,
password security,logging all operations,
flexible configuration.Implemented main suite of
xBase operators.
Includes DLL of LDBF API for Windows.
Keywords: LDBF,ldbf
Author: vlad@torn.ktts.kharkov.ua (Vlad Seriakov)
Maintained-by: vlad@torn.ktts.kharkov.ua
Primary-site: sunsite.unc.edu (/pub/Linux/Incoming)
707 Kb ldbf-0.9.9.tar.gz
930 b ldbf.lsm
Alternate-site: ftp.kiae.su ( /linux/misc )
Original-site:
Platforms: Linux 1.2.0 or later with IPC support
Copying-policy: Freeware
.......
Title: dbview
Version: 1.0.0
Entered-date: 20APR96
Description: dbview is a little tool that will display dBase III and
IV files. You can also use it to convert your old .dbf
files for further use with Unix.
Keywords: database dbase view convert
Author: joey@infodrom.north.de (Martin Schulze)
Maintained-by: joey@infodrom.north.de (Martin Schulze)
Primary-site: sunsite.unc.edu /pub/Linux/apps/databases
10kB dbview-1.0.0.tar.gz
Original-site: ftp.infodrom.north.de /pub/Linux/Devel/dbview
10kB dbview-1.0.0.tar.gz
Copying-policy: GPL
.......
Title: libdbf
Version: 1.4
Description: Tools for manipulating dBase files
Keywords: unix dbase
Author: beacker@sgi.com
Maintained-by: Nobody to my knowledge
Primary-site: Wherever you put it.
Original-site: news::comp.sources.misc
Platforms: Unix (This copy linuxified)
Copying-policy: No commercial use, no charging for distribution (see README).
Entered-date: 01JAN96
Those were all found just using the "dbf" search string on a local
copy of the LSM (just a text file I keep around since I do so much
Linux support work).
There are several Linux Software Map search engines and
searchable Linux Software Database sites out on the web. I don't
even have a "favorite" one any more.
Try:
Linux Search Database
http://www.egypt.pca.net/LSDB/lynx.html
... which found this one:
Title: AppGEN
Version: 0.2 alpha
Entered Date: 11JUL96
Description: Database application generator and 4GL for Postgres95 and
HTTPD. DBase DBF file to SQL Convertor.
Key Words: Application Generator 4GL SQL Web WWW Forms Postgres95 DBF
Author: Andrew Whaley
Primary Site: sunsite.unc.edu /pub/Linux/apps/databases/postgres
appgen-0.2-alpha.tar.gz
Alternate Site:
Copying Policy: GPL'ish
End
... or try:
Linux Links (by Goob!) at:
http://www.croftj.net/~goob/
(The search engine is not too hot, but the hierarchy of links is
great). There is a reference there to a semi-free package called
X2c (the portable xBase compiler). X2c seems to have some features
for creating binary CGI interfaces to your DBF databases, which
might be an alternative to converting to SQL if you aren't
worried about some of the concurrency, integrity and business
rules enforcement that are associated with SQL --- or even if you
just need a quick interim solution to use while you're doing the
xBase to SQL port.
Another place to check into is:
The #LinuxOS Webpage: Linux Software Search Engines and Indices
http://www.linuxos.org/Lsoftsearch.html
As the name suggests that site is maintained by principals of the
#LinuxOS IRC channel on EFNet and it contains a list of Linux link
farms, search engines and indices (what a surprise!).
So, I'd say there's plenty of places to look.
____________________________
(?) From Michael Kepler on the L.U.S.T List on 5 Jun 1998
I'd just like to thank you (Jim Dennis) for your very comprehensive
and helpful responses to the XBase question. I had no idea there were
so many database options available for Linux. I joined this
conversation out of idle personal interest, but now I think I see some
possibilities for solutions for current needs we have at our company.
Thanks again,
Michael Kepler
VP Systems Development
Metro One Telecommunications
_________________________________________________
(?) auto response for email ?
From Ted via the L.U.S.T List on 04 Jun 1998
Whatever you do, don't do this if you are on a mailing list. Think
about the consequences...
Ted the Lurker
(!) Ted, when replying to L.U.S.T. messages, please remove the
extraneous quoting.
(?) Hi,
How does one set up sendmail for automatically responding to an email
indicating that one is out of the office and will be responding to the
incoming emails at a later date ?
Thanks,
Jyh-shing Chen
(!) Normally one doesn't set up 'sendmail' to do the automated
response. Normally one would put in a .forward file with something
like:
"| /usr/local/bin/vacation...."
(or something like that).
There is an old program named "vacation" (written by Eric Allman,
author of sendmail) which can be used for this purpose. You can
read the man page for it if you like. It does some checks to
prevent replies to mailing lists (looks for a "Precedence: bulk"
header line) and system accounts (Mailer-Daemon, Postmaster, etc).
It also maintains a "cache" of addresses to which the "vacation
message" (or other auto-response) has been sent to prevent spurious
(and very annoying) duplicate responses to the same address.
(In other words, if you really are "on vacation" and someone
routinely copies you on some sort of mail, usually as part of a
workgroup list, they only need to hear about it once. I think
vacation defaults to an eight day limit between responses).
That would be one way one might do it.
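For reference, the classic setup looks something like this (the
username and the path to the binary are placeholders --- check
'man vacation' on your system):
vacation -I                        # initialize the ~/.vacation.db cache
cat > ~/.forward <<'EOF'
\jdoe, "|/usr/bin/vacation jdoe"
EOF
# then put the text of the auto-reply into ~/.vacation.msg
The leading backslash keeps a copy of each message in your own
mailbox while a second copy is piped through 'vacation'.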
However, this is Linux and there are even better ways. Most Linux
distributions default to 'sendmail' as the MTA (mail transport
agent) and use procmail as the MDA (mail delivery agent). (You
presumably use elm, pine, MH, or whatever you like as your MUA ---
mail user agent).
'procmail' is a "mail processing package" consisting of a few small
programs that you call upon via your own .procmailrc scripts. I
wrote an article about them for Linux Gazette about a year ago. You
can still find it, and some hot links, at the
http://www.linuxgazette.com/ web site.
The procmail documentation is a bit confusing so let me offer a
couple of quick notes: procmail is a very simple scripting
language. A procmail program consists of a list of "recipes." When
an item arrives (is delivered via procmail) the procmail binary
traverses the script from the top, scanning for the beginnings of
recipes (usually starting with a line like):
:0
... or
:0 B
(where B is a "flag" --- and there are several of those which mean
different things).
The rest of each recipe consists of some number of "conditions"
(patterns) and one "action" (disposition). Each of the condition
lines is of the form:
* ^From:.*foo...
... where ^From:.*foo... is a regular expression that is checked
against portions of the mail message that is currently "in hand"
(as it were). Usually your patterns will only be applied to the
message's headers. You can use the B flag on the recipe line to
apply them to the body instead, or you can put flags on individual
condition lines using a syntax like: * B ?? $PATTERN (where you
replace $PATTERN with the regex for your pattern).
All of the conditions are logically AND'ed for each recipe
--- so something like:
:0
* ^From: joe.*
* ^Precedence: bulk
... would match mail that was from joe (in this case any joe at any
address) AND had a header indicating that it was of "bulk"
precedence.
After any/all of your condition lines, in a given recipe you have
an action line. The actions you can take are:
* "file it"
* "forward it"
* "pipe it into a program" (such as an autoreply 'bot).
To "forget it" you just "file it" to /dev/null. In general any
filename on the action line will be considered to be a mail folder.
Any filename with no path elements will be considered a standard
mbox (elm/pine compatible) folder under your ~/Mail directory (??).
(Normally you'll have a MAILDIR variable set. You can assign and
reference variables in procmail in pretty much the same ways as in
sh (Bourne shell)).
A name that refers to a directory will cause procmail to write each
message into a separate file in that directory (this is called a
"directory folder"). If you use a folder of the form: foo/. then
procmail will write the messages into the $MAILDIR/foo/ directory
using an MH compatible name and format.
To forward your mail you start the action line with a "!" (bang)
and simply give it an address. Be very careful about forwarding to
any address that might have its own procmail or other forwarding
agent attached. Otherwise you'll create a mail loop. For this
reason most procmail wizards never use the "!" forwarding operator
--- they pass the message to a pipe, adding their own headers and
formatting the message to the new address (still forwarding it --
but with some checks and changes in the headers).
So, here's how you pipe the message (to forward or autoreply): you
start your action line with a | (pipe) symbol and the rest is just
the command line. The procmail suite comes with a program called
'formail' (FORmat some MAIL headers).
So if you pipe mail to formail with the "-r" switch it will format
a "reply" and if you add the -A switch it will "Add" a custom
header line (replacing any previously matching header).
Here's an example:
:0
* !^FROM_MAILER
* !^FROM_DAEMON
* < 10000
* ^Subject: info
* !^X-Loop: info@starshine.org
| ((formail -rk -A "Precedence: junk" \
     -A "X-Loop: info@starshine.org" ; \
     echo "Info Request received on:" `date`) \
   | $HOME/insert.doc -v file=$DOC/general.info) | $SENDMAIL -t -oi -oe
... note this one is unusually complex since I am "keeping" the
sender's message, checking that the whole thing is under 10K,
appending the date on which I received the message, and inserting
(via a two line awk script named "insert.doc") a response. Also those
"FROM_MAILER" and "FROM_DAEMON" patterns are a couple of "magic"
patterns that procmail recognizes --- they are actually expanded to
some hefty regexes internally.
... in other words, this action line is doing a lot more than most
auto-replies. The point is that I can use formail to create the reply
headers (which it gets by filtering the header as procmail passes
the header and body of the mail into the pipe). I can then ship the
results of that to some other process (to do other processing on
the body or whatever) and finally pass it all to a copy of
sendmail (the full, local path to which is conveniently stored in
the $SENDMAIL variable). The -t switch on 'sendmail' means: "Take
the 'to' addresses from the headers on your standard input" ---
this is the safest and cleanest way to pipe messages into sendmail.
That's a short course on procmail. The tutorial I wrote for Linux
Gazette is even more basic than that --- so if I rattled through
some of that too fast: go read it.
One last note: There are 5 man pages on procmail: one for the
binary, one on the rc file syntax (the programming language), one
that's full of examples, and another on the "weighted scoring"
extensions (which allow you to add and subtract values to a
"weight" using various conditional patterns, which can be sensitive
to how many times a pattern appears in a message --- so you could
automatically discriminate against messages that were more than
half "quoted" lines).
The weighted scoring stuff is high wizardry --- I don't use it. The
examples are mostly suitable for cut and paste.
Keep in mind that you can call all sorts of programs, not just
'formail' --- so you could write a simple procmail script call on a
"sendpage" program when someone really important sends you mail
about something "really important"
Also 'formail' has the -D switch, which means one thing if used in
conjunction with -r (the combo means "don't duplicate our reply"
--- like vacation, it checks the sender's address against a small
cache file). It means something else when used without the -r
(don't deliver to this folder if this is a duplicate according to
the Message-ID: header line). Both meanings have quite a bit to do
with "duplication" --- but are much different in usage.
If you subscribe to lists, like L.U.S.T, I suggest procmail for
auto sorting your mail. When you want to add auto replies --- even
if you're just going to call on Eric's 'vacation' program, you
should add that as a recipe after any procmail sorting (and spam
filtering) and with the * !^FROM_ and X-Loop: patterns. That will
prevent auto-replies to mailing lists that don't put in their
"Precedence: Bulk" line, and that might be from daemons and mailers
(other auto responders) that 'vacation' doesn't "see" ('procmail'
and 'formail' are more recent and benefit from a few more years of
experience with Internet "standards drift").
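If you go that route, here's a sketch adapted from the examples in
the procmailex man page (substitute your own address; 8192 is the
size limit on the little cache of addresses already answered):
:0 Whc: vacation.lock
* !^FROM_DAEMON
* !^FROM_MAILER
* !^X-Loop: myaddress@example.com
| formail -rD 8192 vacation.cache
  :0 ehc            # only reached if the sender was NOT already in the cache
  | (formail -rA "Precedence: junk" \
             -A "X-Loop: myaddress@example.com" ; \
     echo "I'm away until Monday; I'll answer your mail then." \
    ) | $SENDMAIL -oi -t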
One of these days I may write a whole book on procmail. It would be
pretty short (like the O'Reilly 'vi' book, or their one on
"termcaps"). It's a very powerful utility that currently is passed
on as an "oral tradition" among sysadmins and Unix hacks. I think I
heard that TDG (the dotfile generator) provides a menu-driven
(GUI?) front end to creating .procmailrc files --- among many
others. That would probably be a good place to look for more info.
[He may have read about it in issue 17's article -- Heather]
_________________________________________________
(?) Connecting Linux to Win '95 via Null Modem
From Chris Gushue on 04 Jun 1998
I have two systems, a 486 and a K6, and I was wondering how (if) I
could connect them using a serial (null modem) cable. One system will
be running Windows 98, the other running Linux. I can't seem to find
any info on the LDP or other webpages. Thanks.
(!) Certainly you can connect them for some purposes.
I don't know anything about Win '98 but I presume it comes with
some sort of terminal emulation package (like the Hyperterm that MS
licensed from Hilgreave for Win '95, or that cheesy old "Terminal"
that they used to ship with Windows 3.x).
You could also get any of several shareware, free, or commercial
communications packages such as Telix (Windows or DOS), Kermit
(DOS) or K95 (Windows), etc.
All of these should have a "direct" or "null modem" option listed
among their "connection/modem" types.
This will give you a basic, character-mode terminal login to your
Linux box. This is not a networking connection --- it is just like
connecting a dumb terminal to the machine (which still gives you
access to most of the applications and almost all of the utilities
and programming tools on your Linux system).
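On the Linux end you just need a getty watching the serial port; a
hedged sketch of the usual /etc/inittab entry (the port, speed and
terminal type are assumptions --- match them to whatever the
terminal package on the other end is set for):
# /etc/inittab: spawn a login prompt on the second serial port
S1:2345:respawn:/sbin/agetty -L ttyS1 38400 vt100
# then make init re-read the file with:  telinit q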
If you want networking between these two systems over the serial
line, that's a different story. You should be able to establish a
SLIP or PPP connection between the two. Once you've done that you
could run any of the TCP/IP protocols over the line. However, it's
much trickier to do that --- and I have no idea how Win '98 will
handle it.
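For reference, the Linux end of such a link is the easy half; a
hedged sketch (the device, speed and address pair are assumptions):
pppd /dev/ttyS1 115200 local crtscts 192.168.2.1:192.168.2.2 persist
Here 'local' tells pppd not to expect modem control lines, and the
address pair assigns the local and remote ends of the link (add
'noauth' if your pppd version demands authentication by default).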
(Under early revisions of Win '95 I remember complaints that the
supplied PPP drivers and their user interface were configured to
work with MSN (Microsoft Network --- their ISP) and that it
required some utility from the "Plus Pack" to allow one to create
and maintain a "chat" script --- a way to log in and
configure/establish a PPP session with any other ISP.)
It seems that MS also added features in their NT 4.x (RAS?, RRAS?)
that allow these systems to act as recipients of the stock Win '95
MS-CHAP authentication method. I guess this was a bid to convince
ISP's to adopt Windows NT for their work.
Meanwhile Gert Doering (and others?) released the AutoPPP
extensions or patches to 'mgetty.'
'mgetty' is Gert's very popular "modem getty" that allows a
modem line to be shared between terminal, fax, network and even
voice (with some modems) for both incoming and outgoing use. One of
the features of 'mgetty' is that it can be configured to recognize
certain login strings ("user name patterns") as a directive to use
an alternative 'login' program.
Thus you can configure your modem line to use ppplogin when given a
"user" name of the form: Pmaryjoe, and to use a traditional 'login'
when presented with others.
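For reference, that dispatching is done in mgetty's login.config file;
a sketch of the relevant lines (the paths and pppd options here are
only illustrative --- check the mgetty documentation for the real
syntax and defaults):
# PPP clients that start negotiating right away:
/AutoPPP/ -  a_ppp  /usr/sbin/pppd auth -chap +pap login
# "user" names starting with P get a PPP login script:
P*        -  -      /usr/local/sbin/ppplogin
# everyone else gets the normal login program:
*         -  -      /bin/login @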
I personally haven't set up AutoPPP. However, a quick Yahoo! search
on the string: "+mgetty +autoppp" gives about 450 Alta Vista hits.
Most of these are from the Linux ISP mailing list. I didn't spot
any that covered AutoPPP over a null modem.
Trying a search string like: + "null modem" +mgetty +win + "95"
... didn't help either. Though it did return a bunch of links to
Linux Gazette mirror sites carrying issues 18, 25, and 28 (false
hits in this case).
Somewhere on the Linux ISP mailing list archives I found a thread
about "null serial" that was on target but not very informative.
Someone mentioned that the Win '95 PPP couldn't handle direct
connection --- and suggested Trumpet Winsock (a third party TCP/IP
suite for Windows --- and DOS --- for years before MS had ever
heard of TCP/IP).
So, it may not be easy to get networking configured over a null
modem line so long as Win '9x is on one end of it. However, I bet
it would be possible. You should probably create a "modem emulation"
driver for Linux that would allow the Win '9x box to work as though
it were sending AT commands to a modem. The "modem emulation"
driver could implement a small AT command subset (responding to
every valid +++AT sequence with "OK" or the
appropriate response).
In the long run it's probably far easier to buy a couple of
ethernet cards (less than $30 each) and a 10baseT "cross over"
cable (necessary if you're not going through a hub, and sometimes
necessary to cascade one hub off of another). Not only is ethernet
much faster than serial --- it is currently much easier to
configure and support (for networking). Another advantage is that
you can later expand; buy a 4, 5 or 8 port ethernet hub and you can
wire up the whole house (actually I've almost filled two 8 port
hubs here --- but I'm a little different).
Conclusion: You can easily use the serial/null modem for simple
terminal access. You might be able to get it working as a
networking interface, but you might have quite a bit of trouble
convincing Win '9x to do PPP over a "direct" or "null modem"
connection. So you might have to look for a third party PPP
replacement (which may need to be upgraded between the Win '95 and
Win '98 versions) --- or you might be able to write some weird
"modem emulation" on the Linux side. For networking it will be much
easier to buy a couple of ethernet cards.
____________________________
(?) Linux help
From Chris Gushue on 04 Jun 1998
Thanks a lot for your thorough and quick response! It was just what I
was looking for, just a basic login to my Linux box to play around
with it until I get around to buying a hub and network cards. It's
kind of funny though, using my K6/233 Win98 machine as a dumb terminal to
my 486/100 Linux box :-)
(!) I was using that VAResearch machine that I reviewed for the
Linux Journal ("betelgeuse": a 266Mhz PII with 64Mb of RAM and a
4Mb Matrox Millenium video) as a dumb terminal to my old 33Mhz 386
("antares") for months. The old 386 was where all my mail and news
was. It's still the network hub, mail and news server for the house
(though now I 'fetchmail' everything over to "canopus," a home-built
P166; the wife mostly took over the PII).
The 386 is the most stable machine in the house -- it's the only
one on a UPS.
_________________________________________________
(?) Hardware Lockups due to Graphics Load
From Brad Alexander on 30 May 1998
Hi Jim,
This isn't Linux-specific, but I'm having a problem and I'm hoping you
can help me come up with a workaround that isn't going to cost a lot
of money.
I have an Intel P-100 on an Amptron AM-7900 board with 64MB of EDO RAM
(2 32MB sticks), a gob of hard drives (a 2.2GB Quantum Fireball IDE
and a FutureDomain SCSI controller with a 420MB Conner, a 1GB Seagate,
1GB Micropolis and 1GB Quantum Empire), a Diamond Stealth 64 with 2MB
DRAM, and a SoundBlaster 16 Plug'n'Pray.
I'm running a heavily modified RedHat 5.0 machine with an 800MB DOS
partition on /dev/hda1 and a 200MB win95 partition on /dev/hda3
(Linux's /+/usr is on /dev/hda2).
I have been seeing system lockups for quite a while now. I noticed
them when running xlock in random mode initially, then noticed that I
was also starting to have problems with some of my dos apps, like
Jane's Longbow and Duke Nukem locking up. Under Linux, I settled on
using xlock in galaxy mode, and the lockups dropped to every couple of
weeks. (Note that during this time, I upgraded memory from 4 8MB
sticks to 2 32s.)
Everything went all right until I upgraded to RedHat 5.0, with XFree86
3.3.1. The lockups increased to about every 2 days. Once I upgraded to
XFree86 3.3.2, they dropped back down to about once a week.
I'm basically using you as a sounding board to see if I might have
missed something. I'm thinking its hardware, but where? The stealth?
The lockups seem to occur during graphics app use, xlock, or the gimp.
The motherboard? The chip? What can I start replacing without sinking
a whole bunch of money into it?
Thanks in advance,
--Brad Alexander
(!) Well, the first thought would be to try a different video card.
I don't have too much confidence that the problem is truly related
to the video card's activity --- so it's just a diagnostics start.
To see if this really is related to graphics, boot up the system in
text mode (don't run X, change your runlevel or initdefault to one
of the non-xdm modes if necessary). Now you can run a couple of
kernel builds on it (that's usually a pretty good stress test; try
'make -j' to work it harder).
It would also be helpful to know what sort of lockup you're
getting. It may be that you could still login via a serial port
(using a null modem and a laptop or any other nearby computer or
terminal). To do this, simply add a line like:
t1:23:respawn:/sbin/agetty -L 38400,19200,9600,2400,1200 ttyS1 vt100
... to your /etc/inittab. This should allow you to use one of your
serial lines to login. It is possible for the Linux X Windows
system and console to be dead while the kernel and other processes
are still up and running. Another test is to ping it from another
system (if you have an ethernet LAN connected to this machine).
Even if telnet doesn't work you want to ping it to see if the
kernel is still responding.
It's also probably worth trying the software watchdog timer code in
the newer kernels. These allow you to configure a kernel module to
emulate a hardware watchdog timer card. These WDT devices are
basically a "dead man's switch" for your system. If the timer isn't
periodically updated by the kernel (or by some other thread in the
kernel, in the case of the emulated WDT) then the WDT triggers a
system reset.
Obviously a software emulation of this isn't quite as reliable as a
hardware WDT --- since a completely hung kernel will never get
around to calling on that module's thread of execution. However, it
isn't too unlikely that the hang is in some specific kernel thread
and that some other thread continues to execute after other parts
have died.
Frankly I'm not sure what the difference is between the kernel
watchdog emulation code and the boot "panic=" parameter. But that's
definitely another thing to try (just add something like panic=60
to your lilo "append=" directive, or manually when you boot up your
system). I guess that the difference would be that there may be
some conditions under which the kernel could get into a comatose or
unresponsive state without panic'ing (if it got tricked into some
really long timeout wait or something). The panic= option forces
the Linux kernel to reboot after a "panic" (a critical error
condition detected by the kernel, usually a corrupted table that
fails its consistency and integrity checks).
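For example, a lilo.conf stanza carrying that option might read as
follows (the kernel image and root device are just examples --- use
your own, and re-run /sbin/lilo afterwards):
image=/boot/vmlinuz
  label=linux
  root=/dev/hda2
  append="panic=60"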
Normally the kernel would just display a "panic" message and sit
there waiting for human intervention. These are very rare (other
than the old "VFS kernel panic, unable to mount root" that occurs
when you have your kernel misconfigured for your arrangement of
hard drives --- or when you change the hardware setting of your
disk drives without updating your kernel (with the 'rdev' command
to set the root device flags) and/or without updating your LILO or
LOADLIN commands (which are usually used to pass these flags to
your kernel to over-ride the compiled-in defaults)).
Other than that common case I think I've only seen one or two Linux
kernel panics in the last 6 years. I've only had about a half dozen
unexplained system lockups over that period --- and that's on about
fifty Linux machines that I've managed during various portions of
that time. (These lockups might have been panics in situations that
were so bad the kernel couldn't even display an error message ---
there's no way to know.)
I've only had to reboot unresponsive Linux boxes about a dozen or
so times in all the years I've used it. This was only a problem in
the late 0.99 and early 1.0x kernels when I was running a very busy
FTP/Web server that was simply overloaded -- the TCP/IP stack would
get so congested that the system would timeout between my login
name and password --- at the console (I'd've loved a working SAK
--- secure attention key back then). I was glad to see the major
TCP/IP re-write in between 1.2 and 2.x.
I'm not trying to toot Linux's horn here --- (well, maybe a little).
The point is that I don't get panics and lockups often enough to
see how the panic= parameter and the softdog/watchdog code would
work in those situations.
However, if you enabled the panic= and/or the softdog kernel
option, you may see that the machine reboots within a minute or
two after your lockup (wait for ten or fifteen). This tells you
that some part of the kernel was still running (and that the
hardware isn't completely wigged out).
Beyond that the things to do are to take out all non-essential
hardware (the sound card would be a great choice --- and the SCSI
card, since you mention that your Linux partitions are on the IDE
drives). As with most technical computing issues, it eventually
boils down to a matter of cost. You mentioned a couple of times how
you don't want to spend money on solving this problem. Ultimately
the time you spend fighting with it translates to money --- and
you'll have to eventually ask what your time is worth.
(The deeper part of this question is that you may find that your
home machine isn't worth the time or the money and you may content
yourself to just use any machines that you encounter at work, or
whatever. Strange as that sounds I've had friends who refuse to
keep a computer around the house specifically because they "spend
enough time with them at work" and feel that "home is for family
time").
At the same time I don't recommend throwing replacement components
at the problem without understanding the nature of the problem.
However, it may be that the best solution is to replace the
motherboard and/or the video card and/or the RAM.
Troubleshooting computers is difficult work. Whole books have been
devoted to the subject (I like the Winn L. Rosch Hardware Bible
personally --- read it years ago and should probably get an updated
copy). There are also parts of the process that can't be gained
from any book --- that you must learn by experience and figure out
through some combination of analysis and intuition. As our
computers become more sophisticated, the balance seems to lean more
toward intuition.
_________________________________________________
(?) Compression Libraries to Link into a C Program
From Corne van Biljon on the linuxprog mailing list on 30 May 1998
(?) Hello
I would like to zip a file, specified by the user, from within a C
program. Currently I use the system() command to invoke gzip. Is there
a compression library or routines out there somewhere, or is there a
better way of doing this ?
Thanks
(!) I can understand your concerns.
The system() and popen() calls are notoriously insecure and can be
used to subvert your program to the users' will.
I would have started with some Yahoo! and Alta Vista searches
(actually I used Google --- a new and interesting search engine at
Stanford University: http://google.stanford.edu/).
The obvious phrase would be "+free +compression +library" (and
reasonable variations).
I get a bunch of links to the PKware Inc. pages (which are
presumably shareware and/or commercial) and then I find a link to
the zlib pages (which declare that they should not be confused with
the Linux zlibc compression libraries).
The zlib home pages are at:
http://www.cdrom.com/pub/infozip/zlib/
... and appear to be gzip compatible, co-written by the primary
author/maintainer of gzip. However the impression I got from this
page is that zlib is not under the GPL (or is under the LGPL) ---
that your zlib-linked code will not be encumbered.
Naturally you'll want to read the licenses yourself.
This zlib home page also has numerous links to other compression
software and programming resources.
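For a rough idea of how this looks in C --- assuming zlib's
gzopen()/gzwrite() convenience interface (check zlib.h for the
authoritative prototypes) and linking with -lz:
#include <stdio.h>
#include <zlib.h>
/* Compress 'infile' into 'outfile' (e.g. "data" -> "data.gz").
   Returns 0 on success, -1 on any error. */
int gzip_file(const char *infile, const char *outfile)
{
    char buf[8192];
    size_t n;
    FILE *in = fopen(infile, "rb");
    gzFile out;
    if (in == NULL)
        return -1;
    out = gzopen(outfile, "wb9");   /* '9' requests best compression */
    if (out == NULL) {
        fclose(in);
        return -1;
    }
    while ((n = fread(buf, 1, sizeof(buf), in)) > 0) {
        if (gzwrite(out, buf, (unsigned) n) != (int) n) {
            fclose(in);
            gzclose(out);
            return -1;
        }
    }
    fclose(in);
    gzclose(out);
    return 0;
}
Since gzip itself is built on the same compression code, the result
is an ordinary .gz file that gunzip can read.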
_________________________________________________
(?) LOVE THE NEW LOOK!!!!
From David Rudder on 28 May 1998
Heather,
I love The Answer Guy's new look! Um, 'nuff said :)
-Dave
No Trespassing
4/17 of a haiku
(!) Glad you like it. I've been working pretty hard on it this
month, and I hope a lot of other people like it too.
So folks, what do you think of the footer? Does the double-footer
on these questions (a nav area for hopping amidst Answer Guy
entries, and the regular LinuxGazette section footer) make sense?
Should they be combined? Should the sectional footer only be shown
at the Answer Guy index?
For this month, I'll make it the same as last, because I kind of
like it... but you, the readers, should definitely let me know if
it's giving you trouble. Thanks and cheers are also welcome :)
Heather Stern
_________________________________________________
(?) Linux PPC on the Umax C500 SuperMac: Not A Good Idea
From Fahimy on 28 May 1998
Hello, I'm a french girl beginning some computer studies. I like
Macintosh so I'm looking for a second-hand Macintosh or clone in
order to work and learn C, Java and Linux on it. I'm perhaps about to
buy an Umax C500, but I'm wondering whether it would be able to run
Linux. From a query to AltaVista, I found you were in a similar
situation some months ago. Quoting a message you sent to the linux-pmac
mailing list :
(!) Someone did send me a kernel that should be able to boot that
system. However I have had other things to keep me busy.
More importantly I can't recommend the Umax Mac clones at this
point. They have announced that they are discontinuing their whole
line of MacOS clones. So you'd be buying an orphan.
I'd suggest an Apple G3 based system --- though I'm still
disappointed about the lack of Mac clone manufacturers. I don't
believe Apple will survive if it is the only supplier of its
platform. On the other hand the G3 is the fastest processor out
there in a commodity microcomputer. In addition I've heard that IBM
has demonstrated a 1.1Ghz (1100Mhz!) version of the G3 architecture
in their labs --- so there is plenty of foreseeable future for this
platform.
As usual we'll see. One nice thing about Linux (and Unix in
general) is that it doesn't constrain us much in our choice of
hardware. We can migrate to a new hardware platform with little or
no effect on the majority of our utilities and applications --- and
a correspondingly modest learning curve.
_________________________________________________
(?) Remote lpd from Solaris to Linux
From kuksi on 27 May 1998
I would like to print from Solaris to Linux. The /etc/hosts.lpd file
contains the Sun's IP address. I have installed the Linux printer on
the Sun as a remote printer. It works fine, but when I print to the
remote Linux printer, it fails.
(!) I presume you mean that it works fine "locally" but fails from
the remote clients.
(?) the contents of the /var/log/message file:
linux_machine_name kernel: lp1 at 0x0378, (polling)
and the next time:
linux_machine_name lpd[number]: sun_machine_name recvjob
linux_machine_name lpd[number]: sun_machine_name request printjob
linux_machine_name lpd[number]: sun_machine_name request displaylong
(!) I guess this is a hacked up excerpt from one of your /var/log/
files.
(?) But the printer in local mode on Linux works fine. (I think :-) )
kuksi
(!) Well, 'lpd' is black magic to me. I've got my remote printing
working on one pair of systems but not on another. Also 'lpd' seems
to be a security nightmare that's almost as bad as the older
'sendmail' releases.
One possibility would be to try installing LPRng (the next
generation of the lpr suite). I've printed out the manual for it
(over a hundred pages long) and worked through a bit of it. It does
seem to be an industrial strength printing/queueing system. Aye,
but there's the rub, it may be overkill for your situation.
So, all I can suggest is that you make sure that you've followed
all of the steps and suggestions in the Printing HOWTO and that you
try to get more specific debugging data.
____________________________
(!) On Wed, 27 May 1998, Jim Dennis wrote:
So, all I can suggest is that you make sure that you've followed
all of the steps and suggestions in the Printing HOWTO and that you
try to get more specific debugging data.
(?) Thanks for your e-mail. I have read many HOWTOs about this, but I
am going to try everything.
(I think so, anyway :-) )
kuksi
_________________________________________________
(?) User Shell on Virtual Console 1
From Todd Blake on 27 May 1998
I, like most people, am the only person who uses my Linux system at home.
What I'd like to do is when my system is done booting to have me
automatically login as my main user account(not as root though) on one
virtual console(the first) and leave all other consoles and virtual
consoles alone, so that someone telnetting in will get a login prompt
like normal, just that I won't. I'd still like the other VCs to have
logins for others to log in, and for other reasons. I've tried just putting
/bin/sh in /etc/inittab and that didn't work, and I'm stumped. Does
anyone have any ideas on this?
Todd Blake
(!) Almost right.
If you want this to "always" be running (i.e. when you type "exit"
from that shell the system "respawns" a new shell under your UID),
you can use the 'open' command something like so:
# Run gettys in standard runlevels
## 1:12345:respawn:/sbin/mingetty tty1
1:12345:respawn:/usr/bin/open -c 1 -w -- su -c - todd /bin/sh
_________________________________________________
(?) Linux Memory Usage vs. Leakage
From Kevin Monceaux on 27 May 1998
Dear Answer Guy,
HELP!!!!!!!!!!!
I really enjoy "The Answer Guy" column, and I hope you can help me
with this one. I'm running Linux 2.0.29. I've been using this version
for quite a while now. Up until now everything's been fine. A couple
of days ago the problem developed. What appears to be happening is
that when programs are run they are not deallocating the memory they
used. Upon first booting the system there is already almost 9 megs of
RAM in use. I've run free to check the memory usage, ran another
command, such as ls, then ran free again and the free memory
decreases. I've noticed that if I run the same command, such as ls,
again the memory usage stays the same. It's only when commands that
haven't been executed before are run that the amount of free memory
decreases. It doesn't take long before I'm out of memory and have to
reboot. Any suggestions you could give me with this problem would be
greatly appreciated.
Thanks in advance,
Kevin Monceaux
(!) If you suspect a memory leak I highly recommend getting a log
of your 'free' or 'vmstat' output before and after a few commands
-- several snapshots.
You can make a cron job to mail you a snapshot of this every hour
or so. You might want to append the output of a ps command to each
of these e-mail snapshots.
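For example, a crontab entry along these lines (installed with
'crontab -e'; the recipient name is a placeholder) would mail you an
hourly snapshot:
0 * * * * ( date; free; echo; vmstat; echo; ps aux ) 2>&1 | mail -s "memory snapshot" yourlogin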
Unfortunately it isn't as easy to interpret the output of these
commands as it should be. It's entirely too easy to misinterpret
the output fields from them -- since Linux normally uses most of
the available memory for file cache buffers -- and large portions
of the shared libraries and memory allocated to forked process is
shared (the memory manager uses "copy-on-write" and other
techniques to minimize the utilization of physical memory). This
makes correlating actual memory usage difficult.
You can also use 'top' (which is a curses process viewer). It can
show you the current state of the system and sort by memory (M) or
CPU utilization (P). You want to isolate the specific process(es)
that is(are) causing the problem. Don't leave 'top' running
unattended, however, since it is a bit of a resource hog in its own
right.
If you do isolate this to a particular program you'll want to see
if there are updates available for it, or for any of the libraries
it uses. You may also want to consider getting a newer kernel ---
such as 2.0.33 or (if it's ready by the time you read this) 2.0.34.
Sorry I can't be more specific --- but you'll have to narrow down
the problem a bit before we can do more. Incidentally you can start
up in single user mode and manually start all of the daemons and
processes that you normally run in your multi-user (initdefault) mode.
Do this slowly, one command/daemon at a time, to see when the
problem first appears. If it happens right away then boot with the
-b option to prevent the execution of any of your boot up scripts
and manually load any kernel modules you're using one at a time.
_________________________________________________
(?) tv cards and dual monitor
From Desperado on 27 May 1998
Hi!
Did you hear about TV cards in Linux? Am I dreaming?
(!) I've heard about them. However, I don't have one to play with
and I haven't even found a decent HOWTO or website to explain
what's required and what's broken (if anything). [At press time,
the Hardware Compatibility HOWTO section 22.5 mentions some
programs that support several TV tuner cards. It's mostly pointers
to tgz files, though, not real help with setup. -- Heather]
(?) What about dual monitors? In Windows 98 it needs at least a PCI
bus, but what about 486 users? I found something related for Linux
(multimon or something like that) but it works with a black and white
video card (don't remember exactly). Is there anything to work with
two monitors in Linux, using two ISA video cards?
(!) As I've explained before, the classic situation with PCs and
multiple monitors used to be that you couldn't put two VGA
(actually any combination of two VGA/EGA) cards into the same
system. Thus you could put a monochrome video card (text only or
"Hercules" MGA) into a system to co-exist with a VGA or EGA.
Frankly I don't remember where CGA was in this mess, though I could
look it up if I really cared. I personally never used CGA --- it
was just the worst of all worlds.
The 'multimon' patches for the Linux kernel are very old -- and
probably haven't been updated to the 2.0.x (much less the 2.1.x)
kernels. I've never used them. I seem to recall that it only
applied to using a system with one VGA (or EGA?) card and one
"Hercules" MGA (monochrome graphics adapter) or possibly an old MDA
(text only monochrome display adapter --- the original IBM video
card).
Another approach that used to be possible was to use very
specialized adapters like the old TIGA (Texas Instruments Graphics
Array?) or DGA (?) cards. These were high resolution graphics
adapters that cost thousands of dollars and weren't compatible with
VGA or any other "standard" cards or software.
However I've never heard of Linux (XFree86) drivers for TIGA or DGA
cards --- and I'm not sure if they are still in production. In fact
I don't actually know anything about these old beasts --- I just
vaguely remember some discussions I had with other nerds back in
the late 80's where the subject came up.
When I last discussed this in LG (many moons ago) I didn't know
that some of the modern PCI video cards had the option to be used
in a "non-VGA" mode. Thus you can take some PCI video cards
configure them to co-exist in a system with another VGA video card.
I have heard that some of the commercial X servers support multiple
physical displays on some cards. I don't seem to recall any of them
for XFree86 --- but a search of their web pages:
http://www.xfree86.org/
...would provide a far more definitive answer.
The last I read none of the XFree86 servers support multi-headed
operation. This is from the following entry in their FAQ:
http://www.xfree86.org/FAQ/index.html#TwoCards
I have yet to see anyone using this feature. One of these days I
might try it. However, not this month.
The Commercial vendors to check with would be:
Xi Graphics (formerly X Inside):
http://www.xig.com/
... and:
Metrolink:
http://www.metrolink.com/
(there may be others but these are the two that I think of when I
think of the commercial X servers for Linux).
BTW: Metrolink didn't appear to have any online FAQ or web site
search engine. However Xi's FAQ lists a sample configuration for
use with two Matrox Millenium cards at
http://www.xig.com/support/faqs.servers.html#Anchor-a5
(?) What is inetd? When I am trying to install the ftp rpm, I get the
message "you need inetd", but in my Red Hat 5.0 CD, in the RPMS
directory there is nothing similar to that name.
(!) That sounds wrong to me. I would expect that message from the
ftpd (the FTP Server package). The default ftp client should be a
part of the NetKit package (probably in the base RPM).
'inetd' is an IP service dispatcher. It listens to a list of TCP/UDP
ports and dynamically launches programs as connections are
requested for the corresponding "well known services." The mapping
of ports to services is done via the /etc/services file, and the
mapping of programs (daemons) to services that will be managed by
inetd is in /etc/inetd.conf.
In all of the major Linux distributions most of the inetd
services are configured to run tcpd (TCP Wrappers). This utility
will check the IP address of the client that is making the
connection request against one or two lists of rules
(/etc/hosts.allow and /etc/hosts.deny). 'tcpd' also makes some
sanity checks, for example to see if the client's reverse mapping
(a DNS request --- gethostbyaddr() actually) matches one of the
addresses that's returned by a forward mapping (gethostbyname()).
That's called a "double reverse lookup" and is somewhat more
difficult for an attacker to "spoof" than just a reverse
(in-addr.arpa) entry.
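For example, a typical wrapped FTP daemon entry in /etc/inetd.conf
looks something like this (the daemon path and flags vary between
distributions --- this is only an illustration):
ftp  stream  tcp  nowait  root  /usr/sbin/tcpd  in.ftpd -l -a
... while /etc/services maps the service name to its port:
ftp  21/tcp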
Are you trying to use an ftp client or a server (daemon)? You might
also try ncftp (Mike Gleason?) which is a nice curses mode (full
screen) client. You can also try lftp which has some nice scripting
features. In fact ncftp also has some rather handy features for use
in scripts.
Another option is to use mc's (midnight commander) ftp features. To
do that just load the program and type cd ftp://..... (the URL form
of the ftp site's name).
Shortly thereafter you should see the files and directories from
your FTP site appear in one of mc's navigation panels --- you can
then navigate the other site, tagging, copying, and managing the
remote files as though they were in a local directory tree.
(?) One thing more, what about download managers? I use Get Right, but
there is no version for Linux, well there is no Java Runtime
Environment for Linux. Any other good application for that?
(!) I presume you mean that you'd like to select a number of files
in an ftp client and have the system continue to try downloading
('get'-ting) them until they are all successfully retrieved.
Perhaps you'd even like to just tag the files and defer the actual
download until later (say, late at night when there's just less
bandwidth in use all over the 'net).
I think there are many programs that can do this. I've used
'mirror' (Lee McLoughlin's Perl script) many times --- but that is
more of a programming utility and it has no interactive front end.
The best bet would be to search the Linux Software Map
(http://www.ssc.com/linux/apps.html) with the words "ftp" and
"client"
I suppose it would be nice to have an FTP client that had an option
to write all your file selections to a file and execute the fetch
later as an 'at' job. Perhaps one of our readers will know of one.
Also there is quite a bit of Java support for Linux. I don't know
about the JRE specifically but it appears to be supported according
to the canonical Linux/Java site (http://www.blackdown.org):
Java-Linux: Javasoft(TM) Products
http://www.blackdown.org/java-linux/products.html
(?) Thank you for your help.
Desperado
(!) I hope that helps. Look at the Blackdown.org site for more info
about Java under Linux.
_________________________________________________________________
Copyright © 1998, James T. Dennis
Published in Linux Gazette Issue 30 July 1998
_________________________________________________________________
[ Table Of Contents ] [ Front Page ] [ Back ] [ Next ]
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
CHAOS: CHeap Array of Obsolete Systems
By Alex Vrenios
_________________________________________________________________
Introduction
If you are anything like me, you're probably not exactly sure how fast
the latest processor is. You probably didn't wait in line to buy the
latest Windows upgrade, and the machine you use to get your work done
probably doesn't look too good next to even the $1000 specials. Maybe,
like me, you need a little spice in your computing time slice.
This article describes a year-long project to create a network of old
PCs - a loosely coupled multi-processor, if you will - all for the
cost of a reasonably priced PC, a lot of my personal time, and a
little bit of luck.
Last year our vintage 1988 Deskpro 386s reached the limits of their
upgradeability, and they still couldn't run all the applications I use
at work. It was time to buy new. My wife gets the first new one this
time, and I'll decide to wait a little while longer. The two old
machines were top of the line, in their day. I still have all the
manuals, the original maintenance diskettes, and a few spare parts.
I'll be sorry to see them go.
_________________________________________________________________
It Begins
With the new PC up and running I found myself reorganizing things. I
went to my favorite computer store for a cable. Being one to avoid
paying for a new cable whenever I can, I went into the back room - the
salvage area - very much like a high tech junk yard. There, near the
corner, on a bottom shelf, were three Deskpro 386s, just like my old
ones at home!
I moved closer. (Didn't want to cause a scene, you know.) Each was
priced from $100 to $150 and the stickers were yellow. The big sign on
the wall said yellow means I can take another 20% off. I did some
quick mental math and decided to offer him $300 for all three of them.
"Excuse me," I said. "The old Deskpro 386s in the corner?" "Twenty
bucks apiece," he said. I put my credit card on the counter and the
PCs in my trunk. He even threw in three AC cords. After the deed was
done, I heard myself asking about others like them because I had to
build a home network. He even agreed to give them to me at the same
rate!
I took them home, took them all apart, and blew out some nasty dust.
The cases cleaned up like new with a little spray cleaner. (Okay, a
lot of spray cleaner.) They all had at least 40 MB hard drives and
standard floppy drives, and some even had extra memory. Every one of them
booted, and all the hard drives reformatted properly. This was surely
an omen.
The cheapest network cards I could find were NE2000 compatible 10Base2
at $29 each. I got commercially made coax cables because I know what I
can do to a BNC connector with a soldering iron. Where was I going to
put all this stuff?
_________________________________________________________________
The Plan
I have a desk, credenza, and a side table in my little office area at
home. The side table happens to be wide enough for three PCs to sit
side-by-side under it, on floor pads. I cut a shelf to fit under it
and got two sliding keyboard drawers for the top. Two on top, with
keyboards and monitors, three on the shelf, and three on the floor
makes eight - that's a nice sized network. I got a pair of 1x4 data
switches to connect the pair of VGA monitors and keyboards to each set
of four machines. Mice do not switch well, so only the top two
machines have them. For what I wanted to build, a lot of mice were not
necessary anyway.
I found three more matching 386s and a very clean Deskpro 486 that I
just couldn't pass up. (It even had a CD-ROM drive!) My final
configuration uses the 486 as the "build" machine, seven 386s as the
multi-processor test bed, and the eighth 386 as a spare. The two
monitors, keyboards, and mice look good up top. The matching PCs
underneath look very natural. The rat's nest of wires is tucked out of
sight.
The Red Hat Linux version 4.2 box said it would work in character mode
on a 40 MB hard drive, but required 8 MB of RAM to run. I did some
quick combinatorics and bought the minimum number of memory chips that
would bring every machine up to that standard. Time to saddle up.
I used a DOS boot diskette to bring up each machine, establish the
type codes for the hard drives, and initialize the network cards. Each
card came with a tee connector, and the coaxial cables went together
quickly.
I got a small label maker and named the 386s after the seven deadly
sins. The 486 was named omission. A local sysadmin friend said
192.64.9.1 through 192.64.9.8 would do fine for my IP addresses. This
was starting to look pretty good.
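An /etc/hosts along these lines ties the names to the addresses (the
particular name-to-address pairings below are only an illustration ---
any consistent assignment will do):
192.64.9.1   omission
192.64.9.2   pride
192.64.9.3   envy
192.64.9.4   gluttony
192.64.9.5   lust
192.64.9.6   anger
192.64.9.7   greed
192.64.9.8   sloth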
_________________________________________________________________
The Installation
I've done my share of software installations, including a few
operating systems. Red Hat tries to make things as easy as possible
for the reasonably experienced person, so I expected an easy time of
it. Not true.
In hindsight I guess it all makes perfect sense, but there were a few
dark moments. Asking for a "Default Gateway" and a "Primary,
Secondary, and Tertiary Nameserver" was a bit over my head. (I got
eight machines on a private network. I don't need no stinkin'
nameserver... Do I?) And a friend had to set me straight on how many
partitions I really needed, explaining how a single partition
containing a swap "file" works fine under Linux. Oddly, the
installation program doesn't ask for NFS mounts if only one partition
exists. (It seems to me like that's when you need them most.) I had to
add this information manually to the /etc/fstab file after the
installation was complete. I updated the /etc/hosts file and switched
both accounts to use the C shell, while I was at it.
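The NFS entries in question look roughly like this in /etc/fstab (the
server name is the 486; the exported paths shown are just an example
of the sort of thing I mean):
omission:/home  /home  nfs  defaults  0 0
omission:/usr   /usr   nfs  defaults  0 0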
I still haven't a clue how to create a Linux boot disk. Nor do I
understand the "rescue" mode on the installation boot floppies. When
the network card "autoprobe" actually recognized my NE2000 compatible,
however, I knew this was all going to work out fine. And when the
second machine started reading the CD-ROM drive in the first one, I
got a little smug.
When I got to one of the machines with a 40 MB hard drive, I
discovered that a 40 MB set of installation files doesn't fit. After
frantic posts on the news groups and the mailing lists, I discovered
that I could de-select some of the software components that I didn't
need and chip the installation set size down to 35 MB, which fit
nicely. With /home and /usr mounted through NFS from the big 486, I
had no fears of running out of work space. In addition to the root
account for maintenance, I created one user account for myself so I
could do the ordinary stuff.
_________________________________________________________________
The Network
With the evidence mounting, I still didn't really believe it all
worked until I actually switched to different systems and did pings
back and forth. When I compiled a simple client/server pair of test
programs, started the server on one, and the client on another, I was
convinced. This is good.
So what, you might ask, am I going to do with an 8-PC network?
I've taken a few graduate courses in distributed and fault tolerant
systems, and I read a lot. There is something I find fascinating about
a distributed algorithm: locally each of the individual processes
obeys the same set of rules, but globally the "system" exhibits an
emergent behavior. All these individual processes look like a single
machine to the casual user.
With sophisticated software running on each of the seven machines,
they can band together to form a single computer that runs application
software, taking advantage of the overlap inherent in most algorithms,
by running a piece of the whole on each machine, collecting and
combining results as each of them completes. The "sophisticated
software" is called a distributed operating system, and the
application it runs has to be modified by hand in order to realize any
performance improvements. The January 1998 issue of Linux Journal is
dedicated to such systems. Beowulf clusters, discussed in that issue,
are within my reach, now that Red Hat released their Extreme Linux CD,
with the associated NASA code and documentation.
Beyond number crunching clusters, there are database server clusters.
The many machines are used to distribute the client transaction loads
so no one machine crashes from overwork. If a process fails, an
associated monitor process might restart it on the same, or some other
machine. And when one machine gets bogged down for whatever reason,
some of its processes might be intentionally stopped and restarted
elsewhere just to redistribute the overall load. This is leading edge
fault tolerant research material.
Finally, there are dozens of simple distributed algorithms along with
dozens of variations on each. Without any add-on sophisticated
software, one may use a C compiler and some UDP socket programming to
first imitate what has been done, then perhaps improve on it. I expect
this will be what I work on first. The seven 386s can each run a copy
of the algorithm under test, instrumented to write behavior trace
records, and the 486 can monitor these traces, displaying the global
behavior in some way that makes sense to me.
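To give a flavor of that, here is a bare-bones sketch of the receiving
side --- a UDP listener that prints whatever trace records arrive on an
arbitrary port (error handling trimmed to the minimum; the port number
means nothing special):
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;
    char buf[1024];
    ssize_t n;
    if (s < 0) {
        perror("socket");
        return 1;
    }
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);
    if (bind(s, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    while ((n = recvfrom(s, buf, sizeof(buf) - 1, 0, NULL, NULL)) > 0) {
        buf[n] = '\0';                 /* one trace record per datagram */
        printf("trace: %s\n", buf);
    }
    return 0;
}
The senders just sendto() their records at this port; the 486 collects
them and can massage the stream however it likes.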
_________________________________________________________________
Conclusion
Whatever your computing interests, a hardware architecture must come
first. The current glut of high performance PCs provides us an
opportunity to build a system that fits our needs, without spending
too much money. The Linux operating system provides a substrate upon
which an interesting software project may grow. I recognized that a
small network of PCs would provide me with a platform that fit well
with what I think is fun. I hope my experience will encourage you to
pursue your own.
My next step is to define and construct a framework for my 486 to
become the monitor, sampling and reporting the behavior of some
distributed algorithm running on the other machines. Maybe that will
be the subject of my next article here.
_________________________________________________________________
Copyright © 1998, Alex Vrenios
Published in Issue 30 of Linux Gazette, July 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
Clueless at the Prompt
By Mike List, troll@net-link.net
_________________________________________________________________
[INLINE]
Welcome to installment 5 of Clueless at the Prompt:
How's it goin'? This month I'm going to go into some basic admin ideas
that you can use to make your home linux box a little easier to deal
with, especially in an emergency.
I'm also going to hit xdm on a couple of points, although I've got the
merest understanding of all the differences between it and running
from "startx"
_________________________________________________________________
*Boot/Rescue Disks:
If you make a mockery of your filesystem by typing for instance a
space between "/" and the file you were trying to rm as root, or your
extraordinarily gifted child discovers how to cold boot your computer
while you're in X and have a dozen windows open, or you accidentally
kick the power strip while you're stretching, or geez, need I go on?
You may discover one of the handiest things (two, actually) is a
Boot/rescue/root disk combo.
To use these, you simply reboot or cycle off the computer as
"gracefully" as possible. Insert the boot disk, which can be your
install boot disk, and start the bootup. Instead of using your install
root disk, you drop a rescue disk in when prompted. These disks can be
gotten from your distribution's ftp site or, more conveniently, if you
have Linux on CD-ROM, guess where you might get it --- most likely
wherever you got your boot image. You can also roll your own, or use
Yard to make a custom rescue disk. As a side note, you can use this
method to make a usable, if not very flexible, mini-Linux system,
something like xdenu.
To use your rescue disk, boot up with your installation boot disk, and
when you are prompted to insert the root disk, just pop the rescue disk
in and hold tight a second. When you get a prompt, you are almost ready
to fix your problems. You can run fsck on a disk partition without
mounting it; in fact that's the safest way to use it. If you bollixed
any init files, you can mount the /dev/?partition on "/mnt", cd to
/mnt, and using vi, edit the mistaken lines or even recreate them if
need be. One important note: your hard drive will be mounted below
/mnt, so don't do anything to "/" or you risk hosing your rescue disk
--- not a nice thing to do when it's all that stands between you and
your Linux system. There is even a defrag utility that you can use
after you fsck your filesystems, but you must make sure that you only
run it on UNMOUNTED filesystems or you'll be subject to a REAL LEARNING
EXPERIENCE!, as in learning to reinstall your box from scratch, or
practicing your backup routine.
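In other words, from the rescue disk's prompt you'd do something on
the order of this (substitute your own root partition for /dev/hda2):
fsck /dev/hda2      # check it while it is still unmounted
mount /dev/hda2 /mnt
cd /mnt/etc
vi inittab          # or whatever file you mangled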
_________________________________________________________________
*Xdm:
If you ever thought it would be cool to be able to start your Linux
box in X mode and log in from an X screen, it isn't too hard to
do, using xdm, the X Display Manager. You can easily start it by
simply typing:
xdm
if you have X configured. It should start with a login screen and
look like the twm window manager, tweed background and all; when
you login for the first time, it probably will be twm, particularly if
you got your XFree86 distribution from xfree86.org --- although I'm
really only accustomed to Slackware, so another window manager might
come as the default in, say, RedHat, SuSe, or Debian distributions.
That can be changed, as can the tweed root window, the login message,
and a great many other small details. You should be aware that because
of the way xdm invokes X, the path will not be the same as if you run
startx. That means that you must either specify the full pathnames for
executables or change your path in the appropriate startup file. Your
access to remote xhosts will be different as well, a problem I haven't
licked yet but by the time you finish reading this I might (or you
might, or we both might, or ...).
Better yet, break out your favorite editor, save a copy of your
/etc/inittab file in case of disaster, and find the line that sets the
default runlevel (it looks something like id:3:initdefault:). Read the
file down a few lines to where it describes the runlevels and change
the default to the one that describes X11R6 --- in Slackware it would
be runlevel 4, so change the id line accordingly if you are a Slacker;
it may be different on other distributions, since they arrange their
init setup and rc.files differently. That's enough to start Linux with
a login screen. You could run
locate xdm
to find the xdm files and give them a good look over. The files you'll
want to look at are the Xresources, Xsession, and Xsetup_0. There are
other files to work over but let's start with our local desktop.
If you look at your Xsession file you'll see you need these files in
your home directory: .xinitrc and .Xresources. The .xinitrc you may
have in your home directory is a reasonable default, and you can copy
the system Xresources to your home .Xresources. If you would like to
use other files as your startup and resources file, you'll need to
specify them in the Xsession file, at the lines that read:
startup=$HOME/.xinitrc
resources=$HOME/.Xresources
You might use .openwin instead of .xinitrc, and .Xdefaults instead of
.Xresources; you'll have to look at what X-related dotfiles are present
in your home directory.
Your Xsetup_0 file can be used to start a background image in the login
screen using a command like:
xv -root -quit /your/image/here
assuming that you have xv installed on your system. You can use other
viewers to start the image, but you will have to read up on the
appropriate command line options for them. You can also enable or
disable the xconsole log, which can be used to notify you of errors in
execution, etc., by piping the xdm-errors file to it in this file,
although I haven't done it and am not real familiar with the
specifics.
_________________________________________________________________
If you have any starting-out type questions, or any tips you think
would be handy for the newbie reader, please email me at:
troll@net-link.net
See you next month!
_________________________________________________________________
Copyright © 1997, Mike List
Published in Issue 30 of the Linux Gazette, July 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
8 Reasons to Make the Switch
By Bill Bennet
_________________________________________________________________
Here are 8 reasons to switch to Linux, the free OS:
1. It is free. Download from the Internet and install it now.
2. Free upgrades. Find the Kernel on the Internet and download the
latest guts for the system.
3. It runs Win3.x, 95, 98, etc. Some programs, not all. Your programs
will run and look the same as in Windows. Find the Wine project.
It is trying to be 100% free of Microsoft code, in order to
further promote freedom of action on the PC. They use the API of
Windows, and write the code for free! When a Windows program won't
run in WINE or WABI the Linux system can be installed AFTER you
install Windows. Then the LILO bootloader can boot either Linux or
Windows, without any upset at all of your delicate and unstable
Windows setup. A dual-boot PC is able to run almost everything and
it tastes great, but is not less filling. Create with the GIMP,
post it to the net, save it in DOS and use it in your office
suite.
4. It runs DOS. Some programs, not all. See dual-boot, above. Your
programs will run using the Dosemu. It makes the programs see a
DOS system on your machine, and they go. Yes, even Warlords II
will run just fine. You just need a paid-for DOS version to
install and a hard disk partition is recommended.
5. It runs Unix. Your Linux is a PC version of the powerful Unix OS.
The universities, NASA, the research institutes, computer
scientists and software developers have been using it since the old days
of computing. You now have access on the Internet to thousands of
programs. They range from obscure utilities to fully developed
productivity systems. Oh, by the way, they are free to download
and are written by the best minds in the computer world.
"Microserfs"(recruited by the monopoly), are best left in their
circular, singular limited world so that the real free thinkers
can write you great innovative, unlimited programs that can solve
real world problems.
6. It runs Macintosh. Yes, you just get the emulator and your Mac
programs will see a Mac system on your PC. Playmaker Football,
anyone?
7. It is fast when used as a network server or for multi-tasking. The
ISP (Internet Service Provider) community is becoming a large
growth area for Linux, with over 20% of them using it. That
percentage is growing as the mainstream shrinks. The choice of
Linux as your office productivity system is really a no-brainer:
Speed, Versatility, Price (free), Upkeep (free), Support (free on
the Internet) and Adaptation. Your upgrades are free and you keep
up with all the innovations in the realm of computing by virtue of
your ability to run all the different operating systems and their
software on one machine. Any questions?
8. You contribute to the expression of freedom of thought and action
when you choose Linux, the free OS. By way of contrast, just ask
yourself 'How many times have I paid for an upgrade of my
system?'. If the answer is one or more, then you paid too much.
Again, ask yourself 'Did I need to upgrade when the owners of the
OS told me to upgrade?'. If your software was running just fine
when you were told to upgrade, then who is running your life?
Finally, ask yourself 'Does following the dictates of the
Windows-Intel monopoly make me an independent PC owner?'. If you can't
run a piece of software that sounds like it does what you want done,
because it is not available for your "operating system", then why do
you continue to let yourself be limited by the owners of the monopoly
"operating system"?
Switching to Linux lets you run the software that you hear about and
lets you choose which programs you want; which programs you need; and
most important, when to buy them.
Staying on Microsoft's schedule, for example, will have seen you
purchase four upgrades to your "operating system" in the last ten
years. DOS 6.22, Windows3x, Windows95 (DOS 7.0) and Windows98 have an
inevitable progression built into their "release" so that you give
your money to the richest man on the planet on a regular basis. That
regular flow of cash is keeping Microsoft solvent, paying the
investors and limiting choices for the 90% of PC users who are trapped
in the Microsoft endless loop of upgrades.
Why am I so adamant in my condemnation of the monopoly? The reason is
that in May of 1998, Microsoft "released" Windows98. That caused a
huge buying surge for Microsoft, because their captive users were
truly afraid of being left out of the "innovation" loop. At the same
time, a press release on the TV claimed that Windows98 had fixed three
thousand (3,000) bugs in the Windows95 "operating system". Only a true
monopoly would even let you know that you had been inflicted with
three thousand (3,000) bugs in your last software purchase. To top off
the irony, the United States government and 20 of their states were
taking Microsoft to court on anti-trust suits over their exclusion of
choices for consumers on which browser to use on the Internet.
That left me with the logical question of whether you PC users had a
choice of how to run your PC and get only the programs that you want
or need. The answer is that the lawsuits are illogical, since you the
consumer can run Linux, use just the Microsoft programs you want and
run any browser you want and run any system you want, all on one PC.
Therefore, Microsoft can wedge their captives into any type of mess
that they wish, simply because you can choose to run Linux and still
be connected to the masses by virtue of your versatility.
Your business can run the same software as your contacts and share the
same type of files and be totally connected, even with the extra 10%
of the market that is not on Windows-Intel. You win and you win with
Linux.
_________________________________________________________________
Copyright © 1998, Bill Bennet
Published in Issue 30 of Linux Gazette, July 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
Integrated Software Development with WipeOut
By Gerd Mueller
_________________________________________________________________
Programming under Linux means you have the choice between various
powerful development tools, such as gcc, gdb, make and a lot more. But
most of them are command-line tools and especially for beginners not
easy to handle. But this is only one side. The other important point
is the source code editor. Many people swear by vi (or one of its
clones) - I don't mind. But it's not enough to have a good editor, a
compiler and a debugger. Especially when you develop larger software
projects you need a tool to organize it efficiently, version control
would be recommended and for object-oriented languages a class browser
is indispensable. Now you have to get all these tools under one
umbrella. I see mainly two ways to resolve this problem: one is called
(X)Emacs and the other is an Integrated Development Environment.
You can resolve nearly every problem with Emacs (maybe in the near
future it will cook coffee for you ;)). It is not only an editor: you
can use it as mail tool, news reader, file system browser, debugger,
compiler and make tool. If you don't have a special feature it is
possible to extend Emacs to fit your needs. But in my opinion it does
not do all these things very easily or intuitively. So I prefer the
second way for software development as mentioned above.
In my understanding an Integrated Development Environment gives you a
graphical interface to the different tools and joins them in one
environment. The tools should help me to organize my projects and
support the code-compile-debug cycle. Advanced features are things
like a GUI-builder, documentation tool and maybe a CASE-tool like
behaviour.
Under Linux you have the choice between various IDEs. They are more or
less powerful and more or less expensive. In this article I'd like to
introduce one of them which is called WipeOut.
[INLINE] History
At the beginning there was just the idea to have some nice and easy to
use frontends for the main development tools of each C++-programmer
under Linux: gcc, gdb and make. We decided to use wxWindows [1] by
Julian Smart to program the GUI. The first versions of WipeOut were
developed with the OpenLook-variant of wxWindows. Later we changed to
wxXt by Marcus Holzem because most of the people didn't like OpenLook.
For basic data structures (container classes, strings, etc.) we
developed dmpack [2]. This library also contains features such as
streamable objects and remote method invocation, which are very
important to the communication between the various WipeOut components.
The communication part of dmpack is based on the socket++-library by
Gnanasekaran Swaminathan which provides an object-oriented interface
to sockets and pipes.
[INLINE] Components and Features of WipeOut
Now WipeOut is a complete teamwork development environment for C++,
Java, Eiffel, Fortran and C projects. (Other languages may follow.) At
present it consists of the following components:
* Project Browser - the main window of WipeOut
* Revision Browser - version control based on CVS, it supports
remote repositories and teamwork
* e3-Editor - the central text editor with flexible syntax
highlighting
* Class Browser - supports Java and C++, used as source code
navigator and cross-referencer
* Debugger - a frontend for gdb/jdb, supports threads
* Make-Shell - a frontend for make, automatic makefile generation
* Symbol-Retriever - a comfortable grep replacement for symbol
searching in a quantity of files
* SurfBoard - our HTML-viewer to show help and man pages
WipeOut is not a RAD tool. This is because it's not a pure Java or C++
development tool. The editor, the project management, and the make
shell also work fine for projects other than C++ or Java. Besides, it
is hard to settle on a particular GUI toolkit, which would be
necessary for the C++ part. So we try to support the programmers with
intuitive frontends for various command-line tools and some important
additional components, which help for more effective programming.
WipeOut is available for various platforms: Linux/i486, Linux/m68k,
Linux/Alpha, Solaris, HP-UX. All these versions are fully compatible,
so that you can use WipeOut for multi-platform development.
Furthermore there are porting activities for LinuxPPC and Irix.
Currently we provide WipeOut in two versions:
* the free standard version: This version is free for non-commercial
use, but it has some restrictions: there is no version control and
a project can contain only one module. All other components have
the full functionality.
* the WipeOutPro version: this version is unlimited and costs $149
for commercial users and $79 for private users.
[INLINE] Installation
The installation is not very difficult. To run WipeOut you need the
following packages:
* wipeout-<version>.tar.gz : this file contains WipeOut itself
* wxxt-share.tar.gz : this is the GUI-library
* wipeout-doc-<version>.tar.gz : the WipeOut documentation
* wipeout-tut-<version>.tar.gz : the WipeOut tutorial
You can obtain the packages from [4]. Then do the following steps:
* Create a new directory, where you want to install WipeOut
* Copy the packages to this directory and unpack them with
'tar xvzf <package>.tar.gz'
* Start the setup program of WipeOut by typing './setup'. Follow the
instructions which the program displays. After finishing it will
create two small shell scripts: include 'wipeout.sh' with the
'source' command in your '.bashrc' or '.bash_profile' if you use
bash, or 'wipeout.csh' in your '.cshrc' file if you use csh.
* After opening a new shell, which uses the modified rc-file, you
can start WipeOut simply by typing 'wipeout'.
First Steps
After starting WipeOut you'll see the Project Browser to the left and
the (empty) editor to the right. The Project Browser is the central
part of WipeOut. Here you open the projects and start the other
components.
The first thing we do is create a new project. To do this we choose the
menu 'Project->New Root Module'. A dialog box opens and we have to input
the directory of the CVS repository (see below) and the directory of our
new project. After confirming, WipeOut asks us about adding the
'Makefile' and the '<OS>.def' file. We choose OK. We will see later what
these files mean. Now WipeOut creates a new module in the repository and
initializes the project directory.
After creating the new project we now have access to all the other
components. At first we will have a look at the Revision Browser.
The Revision Browser - Part I
The Revision Browser manages the modules and files of your project. A
module represents a directory and groups its files logically. A module
can also contain submodules, so that you can build a module hierarchy
which represents your project.
The Revision Browser shows this hierarchy in a GUI element called the
browser box. This box was inspired by a similar GUI element of the
NeXTSTEP system and it's a clever and easy way to display a hierarchy.
Each listbox shows one level of the hierarchy. The top of the hierarchy
is shown in the leftmost listbox. If you select an item, the listbox to
the right contains its children. With the arrow buttons to the left and
right of the box you can scroll through the hierarchy.
The files belonging to a module are shown in the right listbox. If you
double-click on an item the corresponding file will be shown in the
editor. To be exact: the file will be opened with the default
application of its category (see below).
Every module has several properties. If you select a module and choose
Edit->Info for File or Module you get a dialog, where you can modify
the module properties. There are the following tabs:
* Categories: The files of a module are divided into categories.
There is an automatic assignment based on the file extension, but
you can also do it explicitly. A file can be a member of more than
one category. Categories mainly have two purposes:
The first is that each category represents a makefile symbol. All
files of a category are included in this symbol. In this way make
knows how to handle a file. An example is the default category
'CPP_source'. It includes all files of the module with the
extensions '*.C', '*.cc' and '*.cpp', which are normally C++ source
code files. If you make a module, all files of that category will
be compiled with c++ (as the default C++ compiler).
As a second point you can assign one or more applications to a
category. One of them is the default application (this is normally
the WipeOut editor e3). If you double-click on a file in the
filelist, this file will be opened with the default application.
If you select a file and click right you'll get a small popup menu
with all applications which are assigned via the category to this
file. By selecting one of them the file will be opened with it.
* Directories: Besides its home directory you can assign other
directories to a module. These directories can have one or more of
the following meanings:
+ Source : this is a directory where the debugger searches for
source code
+ Header : this is a directory where additional header files
are located; the compiler will need these
+ Library : the compiler searches in this directory for
libraries
+ Make : WipeOut starts a recursive make in this directory,
when you start to make the module
+ Browse : marks the directory for the Class Browser so that it
will parse the directory for classes
The default properties of the module's home directory are 'Source'
and 'Browse'.
* Options: Here you can set some additional makefile options. If you
click on one of the buttons the editor shows the corresponding
line of the '<OS>.def' file. You can set your compiler, compiler
flags, include and library paths, and libraries which you want to
link to your program.
* Tools: Here you can set various tool properties. At present there
is only the make-command property: you can enter here the make
command which you would like to use for making your project. The
default is of course a simple make, but you may change it to gmake
or pmake.
_________________________________________________________________
WhatYouSeeIsWhatYouGet in WipeOut
There is one point where WipeOut differs from many other programs: most
settings become persistent immediately, i.e. you don't need to save them
explicitly - real WYSIWYG. Only some modal dialogs have OK and Cancel
buttons, so that you have the choice to confirm or to cancel.
Furthermore, you don't need to save property changes you made for a
module; e.g. if you add a directory or category, these items will be
immediately visible to all other components without saving and closing
the dialog box. We think this way of working is faster and more
intuitive.
_________________________________________________________________
The aim of the game
Now we know some things about modules and categories. It's time to
turn back to practice and produce a little chunk of code. In our
example we will build up a small String class including the
obligatory 'Hello World' program.
To create the source code for a module we have three possibilities:
* do it the old-fashioned and good way - open a new file in the
editor and hack the code into it
* import existing files into the module
* create class and method headers with the Class Browser
We choose the last point, so that we can take a closer look at the
Class Browser. To open it just click on the third button of the
Project Browser.
The Class Browser
The Class Browser is based on an incremental source code parser. That
means your code can be incomplete or wrong, but the Class Browser will
scan it for classes, methods and members as well as possible. The Class
Browser parses the project directories which you marked as 'Browse' (see
above). After the internal database has been built for the first time,
only files which have changed, are new, or depend on changed files will
be parsed again. You have to tell the Class Browser explicitly to update
the class hierarchy with Hierarchy->Update (or, faster, the corresponding
toolbar button). But you only need to update if you have changed a class
or method declaration.
Apart from the well-known browserbox, which is used here to show the
class hierarchy, the Class Browser has another WipeOut-standard
GUI-element - the panelbox.
The panelbox was developed because over time there were too many
non-modal dialogs showing important information, which was hard to keep
track of. The panelbox is clearly structured and gives fast access to
various information without losing the overview. The panelbox consists
of one or more subpanels. You can set the number and kind of the panels
with the small buttons at its top.
The Class Browser has five different subpanels:
* Methods : shows the methods of the current class
* Members : shows the members of the current class
* Hotlist : shows the hotlist (see below)
* All Classes : shows all classes of the hierarchy
* All Methods : shows all methods of the hierarchy
The hotlist is a collection of often used classes, methods and
members, so that you have fast access to them. With the menu item
Hotspots->Add Hotspot you can add the current class, method or member
to the hotlist. A double-click on a hotspot opens the corresponding
file in the editor. The same applies to all other listboxes. Besides
this, you can control the listboxes via the keyboard: with the cursor
keys, 'Home', 'End', 'PageUp' and 'PageDown', but also with alphanumeric
keys. If you press a letter the listbox cursor jumps to the first item
beginning with that letter.
But now we want to build our String class: we do that with Edit->New
Class .... We input the name of the class and press Insert In New
File. WipeOut will ask us several questions, but we confirm all of
them with OK. The editor now shows us a new header file for our
String class.
The next step is to add some methods. We do that with the Edit->New
Method ...-dialog. Simply input the method declaration as you know it
from C++/Java and set the editor cursor to the right places when
WipeOut asks you to do so. After writing some implementation code your
source files should look as follows:
Listing 1
// $Id$
// some comments ...
#ifndef _String_h
#define _String_h

class String {
protected:
    char* _data;
public:
    String(char *);
    virtual String();
    virtual String& operator= (char *);
    virtual char* data() const;
};

#endif
Listing 2
// $Id$
// some comments ...
#include <iostream.h>   // cout, endl (pre-standard headers, as common in 1998)
#include <string.h>     // strlen, strcpy
#include "String.h"

String::String (char *data) {
    _data = new char[strlen(data)+1];
    strcpy (_data, data);
}

String::String() {
    delete _data;
}

char* String::data() const {
    return _data;
}

String& String::operator= (char* data) {
    delete _data;
    _data = new char[strlen(data)+1];
    strcpy (_data, data);
    return *this;
}

main() {
    String str ("Hello ...");
    cout << str.data() << endl;
}
Because we generated the class with the Class Browser we don't need to
update the class hierarchy explicitly. Besides this, the Class Browser
automatically added the files 'String.h' and 'String.cc' to the module
in the Revision Browser.
The Revision Browser - Part II
Before we continue, just a few words about version control for those
readers who are not familiar with it. The repository (we set its
directory while creating the new project) is the central database of the
version control system. All developers get the current source code
version from there. Each developer has a local copy of this version (or
a version of her/his choice) and she/he can edit it.
When a developer does a commit, the local copy of the file goes into the
repository. Now all other developers have access to this new version of
the file. They have to update their local copy. If a developer has made
changes to a file but not committed them yet and updates the file now,
CVS merges the local copy with the current version from the repository.
The developer will not lose his changes. After updating, the local copy
may contain conflicts. That means the changes of the developer collide
with the changes from the repository version. The developer has to
resolve these conflicts (with the support of the e3 editor) before
committing the file again.
If you take a look at the file list of the Revision Browser, you will
see four files there: 'Makefile', '<OS>.def', 'String.h' and
'String.cc'. All these items have an '[n]' at the beginning and empty
parentheses at the end. The signs within the brackets have the
following meanings:
* + : the file is up-to-date
* < : you need to 'Commit' the file
* > : you need to 'Update' the file
* x : there could be conflicts with other developers
* X : there is a conflict within the file
* n : the file was locally added
* - : the file was locally deleted
* ? : no status information available
Apart from the conflict symbols, the signs have the same meaning for
modules.
The parentheses after the file name contain the version of the local
copy. If we select our module and commit it with Revision->Commit File
or Module the version numbers of the files change to '1.1' and the
status changes to '+'.
Especially in team development, the status of a file or module can
change at any time. You have three possibilities to keep the Revision
Browser up-to-date (Project->Module Properties):
* on item select: status update each time you click on a file or
module
* on tool select: status update each time you select the first
button of the toolbar
* timer interval: status update every x seconds
Note that each status update causes a CVS command. If you work with a
remote repository over a poor connection, it is recommended to choose
the second possibility.
WipeOut has a lot of other features for version control and teamwork,
e.g. you can create version branches for files and modules, you can
merge these branches again and you can assign symbolic names (tags) to
versions. You have various possibilities to import existing projects
(with or without CVS). All these things are described in the WipeOut
documentation and with a little patience it should be easy to figure
them out.
Before we now compile our small project we'll take a look at the
editor.
The Text-Editor
The editor is a central component of WipeOut. You use it for source
code editing and whenever another component needs to show a file or to
know a source code position it uses the editor. This is one of the
basic concepts of WipeOut: only one central text editor.
This causes a very high integration of the editor into the development
environment. That's why it is not possible to use another editor within
WipeOut. This seems to be a great disadvantage, because most developers
have their own favourite editor and it's not easy to switch to another
one. But this way it is possible to integrate features such as symbol
completion: if you press 'Ctrl-.' in the editor, it will try to complete
the word you are currently writing. For this it uses the database of the
Class Browser and looks for a matching class, method or member name.
Another nifty feature is the integrated man page viewer: If you select
a symbol in the Text-Editor and press 'Ctrl-m' the SurfBoard will show
the related man page if there is one. In the near future we will
extend this to info pages and external HTML documentation.
Syntax Highlighting
The highlighting of syntactical elements increases the readability of
source code. The WipeOut-editor uses regular expressions to do that.
The syntax is similar to the 'grep'-command. The documentation of
WipeOut contains a general overview of the meta-symbols. This kind of
highlighting slows down the editor a little bit but it gives you the
flexibility to create your own highlighting style.
A style is a set of a regular expression, a file pattern, a color and
a font. Each style highlights a special syntactical element specified
by the regular expression in the given color and font, but only in
those files, which match the file pattern. You can create and edit
styles with Properties->Highlighting.
The editor has default styles for C++, Java, Objective-C, LaTeX and
HTML. You can use these styles as they are, but you can also change
them. There is a lot of room for experiments.
Besides the styles there are some other parameters: various colors,
tabs, undo depth, font sizes, etc. All in all the editor is very easy
to use, so there shouldn't be any big problems.
The Make-Shell
But now back to our mini-project: after creating files, adding them to
a module, working with the Class Browser and the Text-Editor it's time
to compile the program.
WipeOut uses make to compile your projects. Normally all necessary
files will be created automatically based on the module information.
Each module has three different parts of the makefile:
* Makefile.inc: contains various symbol definitions, e.g. each
category gets a symbol which contains the files of the category.
The makefile uses these symbols to determine which files to
compile for a particular target. This file is updated
continuously by WipeOut and you should not edit it.
* <OS>.def: contains platform-dependent parameters. '<OS>' will
be replaced by the OS name, e.g. Linux. When you develop a project
with WipeOut across various platforms, this file helps you make the
platform-specific compiler settings.
* Makefile: This is the makefile itself. It includes 'Makefile.inc'
and '<OS>.def'. You can edit it to add targets other than the
default ones. The Make-Shell will call make with this file. The
makefile contains default rules to compile C, C++, Java and
flex sources. It has targets to build C/C++ and Java programs and
static or shared libraries.
To start make we open the Make-Shell with the 5th button of the
Project Browser. This dialog has only a few elements: the Start-button
to start make, an edit field to set a special target (an empty field
means the default target - 'cplusplus') and two checkboxes. Normally
Make-Shell shows only compiler errors and warnings in the lower
listbox after make has finished. If we check 'All Lines', we get all
the make output in the listbox. If we check Progress Window, a small
output console opens while compiling and shows the original make
output.
We compile our program simply by pressing Start. After it finishes, the
error listbox shows us a warning about a virtual constructor. We click
on the warning and the editor cursor jumps to the offending line. We
recognize that the constructor should be a destructor, so we complete
the implementation and the declaration with the tilde character. We
compile again and now everything should be okay.
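For readers following along by hand, here is a minimal sketch of what the
two tilde changes might look like (the names match the listings above; the
array form of delete is my own addition, chosen to match the
'new char[...]' allocation):
// In String.h, the declaration becomes a destructor:
    virtual ~String();
// In String.cc, the implementation becomes a destructor as well:
String::~String() {
    delete [] _data;    // array delete to match new char[strlen(data)+1]
}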
To test our program we open the shell of the Project Browser and enter
the name of our module, because this is the name of the program. The
output is, lo and behold:
Hello ...
As the next step we extend our project a bit. For this we add two
methods:
Listing 3
String& String::operator+= (const String& rhs) {
    char* buf = new char[length()];
    for (int i=0; i<length(); i++)
        buf[i] = data()[i];
    for (int i=0; i<rhs.length(); i++)
        buf[length()+i] = rhs.data()[i];
    delete _data;
    _data = buf;
    return *this;
}

int String::length() const {
    return strlen(data());
}
We modify the main function as follows:
Listing 4
main() {
    String str ("Hello");
    str += String (" from WipeOut.");
    cout << str.data() << endl;
}
After recompiling and running the program we now get a 'Segmentation fault'
and we have no idea why. This is the right time to use the debugger.
The debugger
We start the debugger with the fourth button of the Project Browser. The
interface consists of elements similar to those of the Class Browser: the
browserbox and the panelbox. The panelbox has five different subpanels:
* Breakpoints : shows all breakpoints
* Stack : shows the current execution stack
* Sources : shows all source related to the program
* Expression : shows a special variable expression
* Threads : used when debugging threads
We arrange the panelbox for our needs: we add the 'Sources'-panel to the
two default panels 'Breakpoints' and 'Stack' by clicking on the small
'plus'-button.
Now we load the program with Session->Load Executable. After that
the 'Source'-panel should show all source files related to the program.
The browserbox is used to show variable values. The first listbox shows
all local variables by default. If you click on an item, the next level
shows its value or components. This way you can easily browse classes,
structures and arrays. After every action the debugger refreshes the
variable values automatically.
If you like to inspect a variable, which is not shown in the browserbox at
the moment, just mark it in the editor and choose Inspect Variable
in the toolbar of the debugger. Now the first list of the browserbox
contains the variable.
Normally the debugger resolves data structures automatically, so that
you always get the correct values; e.g. it shows the content of a pointer
and not the pointer value itself. But sometimes this is not possible,
e.g. if you declared an array of integer pointers as 'int**'. To get the
items of the array you have to cast the type with Inspect->Change/Cast
Variable.
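As a purely hypothetical illustration of that situation (this code is not
part of the example project): the type 'int**' alone does not tell the
debugger how many elements the array has, so without a cast it can only
resolve the first pointer.
int** table = new int*[3];          // an array of three int pointers
for (int i = 0; i < 3; i++)
    table[i] = new int(i * 10);     // table[0..2] point to 0, 10, 20
// The debugger only sees 'int** table'; to browse all three elements
// you have to cast the variable via Inspect->Change/Cast Variable.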
With Inspect->Move Variable to Top it is possible to move a variable
from a lower level of the hierarchy to the top, so that you don't need to
browse through the whole hierarchy to get the value of the variable.
Because we've got no idea what's wrong with our program, we simply start
it with Run->Run. This causes a 'Segmentation fault' again, but after we
have confirmed the message box the editor colors the error line red.
We've got an error in the method String::operator+=.
Before we correct the error we should kill the program with Run->Kill.
To know what's going on in the method we want to find out what the
method data does. We use the Class Browser as a cross-referencer to find
the implementation of this method. To do this we mark data and choose
Edit->Search Symbol in the Class Browser. Now the editor shows the source
code of the method data. We see that the method doesn't do anything
exciting; it only returns the _data pointer. We go back to the
String::operator+= method, take a closer look at it and realize
that we have not allocated enough memory for buf. So we modify the
first line as follows:
char* buf = new char[length() + rhs.length()];
We compile and run the program again and everything is fine. But we can't
see the output. We resolve the problem with Inspect->Program Console ....
This opens a small console and after starting the program again, we see
the output:
Hello from WipeOut.
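For completeness, here is a sketch of how the whole corrected method might
look. Beyond the fix above it also reserves one extra byte and writes a
terminating '\0', since strlen() in length() and the output via data()
expect a null-terminated string; that extra byte, like the array form of
delete, is my own addition and not part of the original listing:
String& String::operator+= (const String& rhs) {
    char* buf = new char[length() + rhs.length() + 1];   // +1 for the '\0'
    for (int i = 0; i < length(); i++)
        buf[i] = data()[i];
    for (int j = 0; j < rhs.length(); j++)
        buf[length() + j] = rhs.data()[j];
    buf[length() + rhs.length()] = '\0';   // terminate before replacing _data
    delete [] _data;
    _data = buf;
    return *this;
}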
The debugger has of course many more features than explained above.
Besides Next, Step and setting breakpoints, it also supports threads.
Again, it is recommended to read the documentation about it.
Apart from the Symbol-Retriever and the help browser SurfBoard, you have
now touched all the components, so you should have a first impression of
the way WipeOut works. Of course we develop WipeOut with WipeOut, and we
find that it greatly increases our productivity and makes programming
easier than just using a simple text editor. Finally, some words about
extending WipeOut.
Writing your own WipeOut components
At this time WipeOut contains only the components that we consider most
important. There may be a lot of other possibilities; e.g. many people
may wish for a GUI builder. We can't and we don't want to do all that
alone. So we
have created the WDK - the WipeOut Development Kit, which allows you
to develop your own components. This interface gives you access to
important functions of WipeOut, e.g. showing a file in the editor or
adding a file to a module.
If you like to program such components you only need to download the
WDK-package from [4]. Apart from the documentation the package contains
DmPack2, socket++, the wxXt header files, a simple
example component and SpellMaster - a frontend for ispell.
I hope you've got a rough overview of WipeOut and its possibilities. If
you have questions, comments or wishes, write to us.
Resources
[1] http://web.ukonline.co.uk/julian.smart/wxwin -
wxWindows/wxXt by Julian Smart and Marcus Holzem
[2] ftp://ftp.virginia.edu - socket++ by Gnanasekaran Swaminathan
[3] http://www.softwarebuero.de/dmpack2-eng.html - DmPack2
[4] http://www.softwarebuero.de/wipeout-eng.html - WipeOut
__________________________________________________________________________
Copyright © 1998, Gerd Mueller
Published in Issue 30 of Linux Gazette, July 1998
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
Install New Icons in Caldera's Looking Glass Desktop
By David Nelson
__________________________________________________________________________
Looking Glass, or lg, is a pleasant GUI desktop included in Caldera's
commercial Linux releases (not the lite versions). However, its setup
procedures and documentation can be very unpleasant. Deciphering how to
add or change icons makes cracking the Enigma code machine look easy. If
you enjoy puzzles and have plenty of time, read
/usr/doc/html/Caldera_Info, specifically the Desktop User's Guide,
Chapters 9 and 11. If you prefer some help, read on.
I wanted to place an icon on the lg desktop to launch Applixware, an
office suite available from Red Hat. To do this I had to create an icon
with a paint program, import it into the lg icon gallery, edit the
"source file for file type definitions," create a new "LG_rulebase
file," and update the lg data directory. Makes a certain commercial
desktop look pretty friendly, what?
Actually, it wasn't as bad as it sounds, and the new icon looks good
and works well. This article will guide you through the process. Here is
your very own free Applixware icon ready to install
in lg; please don't complain about my artistry. You can use the
same process to install any program's icon.
The first step is to create the icon. I tried to use the lg
icon editor but found it crude and prone to crash. Xpaint works well and
is probably already on your system; to be sure, execute the command
locate xpaint
I used Applix Graphics, in part to learn more about Applixware, with final
touch-up in the lg icon editor. Whatever program you use, the
resulting icon should be about 40x40 pixels, stored in either GIF or PPM
format.
The next step is to import the icon. At the top of the lg desktop,
click on Run, then Icon Editor. When the editor opens, click on Galleries,
then System Icon Gallery. When the gallery window opens, click on Icon,
then New. You will see an emphasized (black) area with a blank icon picture,
probably labeled icon1. At the top of the gallery window, click on Icon,
then Import. A file window opens. Navigate to where your GIF or PPM file
is. Click on the file, then click on load in the file window. If the icon
is just the right size, it will import directly into the emphasized area
in the system gallery. If not, a window will appear that contains your
icon. If part is cut off, drag on the lower right corner to enlarge the
window and show your whole icon. (I'm assuming your icon ended up somewhat
bigger that 40x40.) Click the radio button "Scale," then "Filter on Scale."
This latter button smooths the image as you resize it. You should see a
little box at the upper left of your icon picture. Drag the corners to
cover your icon. Your final icon now appears in a smaller box at the upper
right of the window. Click Apply; the gallery window puts your icon into
the blank icon picture and changes the name to that of your icon file.
To give the icon the right name, click on Icon in the gallery window,
then Rename. In the New Name box, type APPLIX_PRG and click on OK. If you
want to do some final "fat bits" touch-up, click Icon, Edit, and have at
it. I suggest that you save your work frequently, because the editor crashed
on me. Don't bother editing the mask. It gives a 3D appearance to a selected
icon, and the default mask is good enough. When done with the editor, click
File, Close, and your final icon appears in the gallery. In the gallery
window, click File, Close, and say yes to save your work. One last warning.
Even though the icon editor lets you export the completed icon for other
purposes, this feature seemed broken. All graphics programs I tried complained
that the exported icon file was unreadable. (Did I mention that the
lg editor seems to have problems?)
Now you have to tell lg how to use the icon. Change directory
to /usr/visix/lg/default/lg_ftc. Open prog.loc.ftc in
your favorite editor. This source file defines local file types and their
associated icons. Insert the following text at the beginning, after the
two "include" lines:
DEFINE TYPE Applix
ICON APPLIX_PRG
FILE_DESCRIPTION "Applix desktop suite program"
BINARY_EXECUTABLE
AND NAME "applix"
INHERIT_COMMANDS BinExNativeClass
END
No, I don't know what it all means. I adapted it from other program entries.
But, hey, it works, and most of it is obvious. If your icon is for a different
program, edit accordingly. Save the file. At the command line in the same
directory, type make all and make install. Quit and restart
the lg desktop.
We're almost done. In the lg window click on Windows, then Open
Directory. Navigate to /opt/applix (or wherever applix is stored) and
you should see your beautiful icon designating your program. Drag the
icon out of the directory window and onto the desktop window. Park it
in an aesthetically pleasing place. Launch your program by
double-clicking your new icon. Congratulations. Doesn't this make you
want to read the rest of the lg documentation? Actually, you might want
to learn about file associations and other wonders of lg. Then you can
write an article for lg (that's Linux Gazette here) telling the rest of
us how you did it.
__________________________________________________________________________
Copyright © 1998, David Nelson
Published in Issue 30 of Linux Gazette, July 1998
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
Installing Microsoft & Linux
By Manish P. Pagey
__________________________________________________________________________
This is a story about my struggles setting up a new laptop computer to
boot two different operating systems, and how I discovered the extent to
which Microsoft and IE4 are lacking. Hopefully, someone will learn from
this experience and think twice before installing IE4 on their machine.
The first operating system that I wanted to install on the machine was
Linux (a free, UNIX-like operating system which can teach Microsoft a
million or two things about what a stable operating system is supposed
to be like). I booted the computer using the Linux boot disks, inserted
the Linux CD-ROM into the CD drive and finished the installation in less
than thirty minutes. Everything was up and running including the network
using a PCMCIA network card. Linux comes with a program called LILO
which allows one to decide which operating system to boot when the
system is powered up. This was also installed without any problems.
The next task was to install Windows 95 on another partition of the same
disk. That is where my nightmare began. (Of course, you may ask why I
wanted to do this in the first place. Because I am stupid, that's why.)
The developers at Microsoft have no regard for other operating systems
and have been living in their shells for so long that they could not
imagine having two operating systems on the same computer. In any case,
after booting from the Windows 95 setup disk, the setup program kept
insisting on destroying all partitions on the disk before installing
the "operating system". It gave me only two choices: let it repartition
the disk or exit setup.
My first choice was to exit setup and try to trick it into installing
Win95 on a DOS partition that was already present. So, I went to the
"A:>" prompt (LOL) and fired up fdisk. I could see the DOS partition and
hence I could format the "C:" drive. I was hoping that if I formatted
the "C:" drive and then tried installing Win95 from "Disk 1" instead of
the "Setup Disk", everything would work fine. So, I formatted the C:
drive and started the "setup" program from "Disk 1". Everything seemed
to work fine until the third disk, when once again the setup program
refused to proceed, this time for a similar reason which I do not
recall.
I was kinda stuck at this point, because if I let the Win95 setup
program repartition the disk, it would gobble up the whole disk and
would not leave any space for the second operating system. The other
option was to use the DOS fdisk utility to destroy all partitions on
the disk and create a new partition for installing Win95, and to
install Win95 before installing Linux. That is the path I took.
So, I destroyed my perfectly working Linux partition and installation
and created a new partition to install Win95. This time, the setup
program worked without any problems and installed the Win95 operating
system on the first partition on the disk. In a few minutes after that I
had Linux running once again on the second partition and reinstalled
LILO to choose the operating system during startup.
As before, I had no trouble getting the network up and running on the
Linux OS. So, I decided to setup the networking on the Win95 side. Guess
what, the driver that Win95 installed to access the PCMCIA cards was not
working properly. I had to try different drivers (and reboot the machine
every time I selected a new driver) and get the correct one by trial and
error. (I did the obvious things such as look up the documentation for
the computer and install the driver corresponding to the documentation,
but that did not work. I had to use a driver that conflicted with the
documentation in order for Win95 to access the PCMCIA cards correctly.
On the other hand, the driver that Linux was using was consistent with
the documentation). Finally, after a long struggle and several million
reboots, I got Win95 to see my PCMCIA cards. Linux came with the driver
for the Ethernet card that I was using but Win95 had to use the floppy
disk provided by the manufacturer (and they say that Win95 supports more
hardware).
I have been exposed to all this hype about IE4.0 and such. So, I decided
that instead of using the good old Netscape Communicator, I would give
IE4.0 a test drive. (Once again, you may ask why I would do such a
stupid thing. Now that I have gone through the torture that I am
describing, I must say that I will never again give a Microsoft
product a test drive just because Microsoft says it's good. What was I
thinking?) I have a fast connection to the Internet and hence the
obvious way to install IE4.0 was to download it from the Microsoft home
page. You would love what happened next.
My local network is behind a firewall. In order to access the Internet,
we need to use the SOCKS proxy service provided by the local gateway
machine. This is not at all uncommon in present-day corporate networks
(in fact, this might even be the most common
configuration). Coming back to my attempt at installing IE4.0, I clicked
on "The Internet" icon sitting on the desktop and went through the
process of setting up the network properties for the machine. After all
the setup was done, I was hoping for it to bring up a browser window for
me. But I realized that the first time you click on this program, it
only performs the setup. You have to run it again to start the browser.
I am not sure why it was set up this way, but I will ignore this for the
time being as there are more important things for me to complain about.
After bringing up this ancient version of Internet Explorer, I wanted to
set up the address of the proxy server so that I could access the
Internet and go to Microsoft's home page. Aha!! The Internet Explorer
that was packaged with my version of Win95 does not understand proxies.
This meant that sitting there I had no way to access the Internet
through my proxy server. I knew that Netscape could do this. So the only
way to get IE4.0 on my machine was to install Netscape first!!!!! Even
getting Netscape was not easy from within Win95. I had to reboot the
machine into Linux. Since Linux came with client programs to access
Socks proxy servers, I could get to the Netscape FTP site and download
the Communicator for Win95. I rebooted the machine into Win95 and
installed Netscape without any problem. I set the preferences for
Netscape so that it knew about my proxy server and everything was
running fine as far as accessing the Internet is concerned.
I used Netscape to download the "ie4setup" file from the Microsoft home
page and fired it up. I will give you one guess as to whether it worked.
You are right!!! It did not even come close to working. The ie4setup
file does nothing more than connect to another server and download
a bunch of files that are required to install IE4.0. Since I am behind a
firewall, it could not find the server. It would be fine if it came back
in a few seconds and told me that it could not find the server. But
that would be the right thing to do, and Microsoft just cannot do any
such thing. Instead, the ie4setup program made me glare at a rotating
globe for fifteen minutes before giving up the search for the server.
After not finding the server, the programmer had half a brain cell to
ask the user for the address of a proxy server. However, this feature of
the setup program does not support SOCKS proxies (I tried putting in the
address of my proxy server but it did not work). Thanks to the people at
NEC, not all was lost yet.
I remembered reading about the program SocksCap32, which allows Win95
programs to access the Internet through a SOCKS proxy server. So I fired
up Netscape again and downloaded/installed SocksCap32. After starting
ie4setup through SocksCap32, it could access the servers and started
downloading the rest of the files that are necessary to install IE4.0.
Just before starting to download these files, it gave me an option of
either saving these files on disk or directly installing IE4.0. I had
little patience left at this time, so I chose the latter. The ie4setup
downloaded all the files correctly and started the installation process.
The installation process continued correctly until about 75% of
installation was complete. At this point, I had to leave the computer
and go away for several hours. I was hoping that when I come back, this
installation will be over. (I am sure you are laughing at me right now).
I came back after about three hours and the installation process had
reached 78% !!!!!!! I waited for a few minutes to see if it was doing
anything. There was no disk activity and hence I concluded that the
program had crashed or hung up. So I clicked on the "Cancel" button to
stop the installation. It came up with a window which said that the
"cancellation" process will take several minutes and that I should not
reboot the machine because that might leave the machine in an
inconsistent state (whatever that means). So I waited for it to finish
the job. There was no disk activity for half an hour which is also when
my patience ran out. I rebooted the machine. When it came up in Win95,
it had installed IE4.0 but not many of its components. I was not sure
what was going on but soon realized that since the ie4setup was run
under SocksCap32, it must have started the rest of the setup under
SocksCap32 too. And, knowing Microsoft, it may not have been designed to
work under the SocksCap32 libraries.
This meant that I should have stored the files downloaded by ie4setup on
the disk and started the setup without using SocksCap32. So, I fired up
ie4setup through SocksCap32 once again and downloaded all the files to
my disk. After that, I started the setup program from these downloaded
files and IE4.0 was installed on the machine without any more problems
in just a few minutes. Whew.
Great. Now that I have IE4.0 and Outlook Express 98 installed on my
machine, I should start using them. I started up IE4.0 and set it up to
use the proxy server. It worked just fine and I could access the
Internet. So far so good. Now, I needed to set up my mail account. So, I
clicked on the "Mail" button, which started up Outlook Express. It asked
me for my email address, mail server name etc. in order to set up the
mail account. After that, I tried to check for new mail. And nothing. It
brought up a window in which it displayed a message that it was trying
to connect to my mail server, but stopped in a minute with an error
saying that the connection to the server had failed!! My POP3 mail
server is outside the local network, which means that one has to get to
it through the SOCKS server. Netscape has no problem doing this, but at
this point I have not found any way to set up Outlook Express to do
this. And this is when I decided to give up completely on IE4.0/Outlook
Express/Win95. I am back to using good old reliable Netscape.
I am not sure if anyone in the Linux community will benefit from this but I am
sure some of the people "on the other side" can learn something from it.
--Pagey
__________________________________________________________________________
Copyright © 1998, Manish P. Pagey
Published in Issue 30 of Linux Gazette, July 1998
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
Linux Expo a Smashing Success!
By Norman M. Jacobowitz
__________________________________________________________________________
For three days of May (28, 29, 30), the normally tranquil Duke University
Campus was transformed into a raucous playground for geeks and hackers
as the Fourth Annual Linux Expo was held at Duke's Bryan Center.
By all accounts, this year's Expo was a smashing success. Red
Hat's Marketing Director, Lisa Sullivan, deserves special thanks for organizing
and directing the event. Many others, from Key Note Speaker Linus
Torvalds to the blue-shirted Duke University catering staff, were instrumental
in making it a memorable three days.
According to Sullivan, approximately 1500 visitors were registered as
paid attendees, while another 350 to 500 were registered as speakers, VIPs
or other gratis attendees. Attendees came from as far away as
Korea, Finland, Colombia and Alaska. Some 34 exhibitors
showed their products and services.
Some of the speakers included:
* Eric S. Raymond gave an inspired, scholarly overview of hacker
motivation in his ``Homesteading the Noosphere'' speech.
* Miguel de Icaza, despite troubles with the overhead projector,
shared much about the technical details and future features of
``GNOME, The GNU Network Object Model Environment'' GUI.
* Mark Mathews described his success as a consultant and Linux
programmer in his talk, ``Developing Linux Software for Fun--Turns
into Profit''.
* Jon ``maddog'' Hall described his encounters with Linux users
worldwide during ``Linux Around the World''.
Some exhibitors included:
* Corel Computer Corporation displayed their new Linux-based
NetWinder Network Computer.
* Digital Equipment Corporation exhibited their latest generation
Alpha processors.
* Linux Hardware Solutions showed off some of their line of, well,
Linux hardware solutions.
* Caldera, Red Hat and Turbo Linux were there presenting their
latest Linux distributions.
Of course, the single most popular event was Friday evening's keynote
address by Linus Torvalds. An estimated 1000 to 1200 folks were on
hand. In his typically unpretentious, casual and brutally honest
style, Linus filled us in on his future vision for the Linux kernel.
Linus first took a moment to thank everyone who has helped him with
the stable kernel releases, especially Alan Cox. Linus went
on to say he is happy with the way Linux is going, especially with
the way new markets are opening up and new applications are being
made available.
Here are some highlights of Linus's views on important topics for the
future of the Linux Kernel:
* The 2.2 release: look for a code freeze in about a month with the
next stable release, Kernel 2.2, to follow as soon as late July or
early August.
* SMP: Symmetrical Multi-Processing is currently one of Linus's
favorite features of the kernel; expect continued development and
enhancement of SMP in future releases.
* Merced: Linus is not particularly impressed with or concerned
about Intel's upcoming 64-bit processor, code-named Merced--he
actually prefers DEC's Alpha architecture. He did say porting
Linux to Merced should be no problem once GCC is optimized for
Merced.
* Java: While Linus would like to see an officially supported Java
Development Kit from Sun, he is still not impressed with Java and
would prefer to stay out of the Microsoft/Sun clash over Java
purity.
* Emulation: Linus would prefer to see native Linux applications and
does not like the idea of emulating other operating systems for
the purpose of running applications.
Of course, Linus had much more to say, but the gist of his speech was
that with more time and some more good luck, Linux will continue to move
towards complete world domination.
Judging from the air of excitement and the buzz of optimism pervading
this year's Linux Expo, Linus is exactly right.
__________________________________________________________________________
Copyright © 1998, Norman M. Jacobowitz
Published in Issue 30 of Linux Gazette, July 1998
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
Linux Expo Editor Wars!
By Eric S. Raymond
__________________________________________________________________________
Thursday night's epic paintball tournament was easily one of Linux
Expo's most eagerly anticipated and talked-about events. The theme
was ``Emacs versus vi and may the best editor win!'' At the appointed
time, the would-be warriors trooped off to a patch of woods south
of Durham and donned team T-shirts donated by O'Reilly & Associates. By
happy coincidence, the 61 fighters split as exactly as possible
down the middle, 30 on the Emacs team and 31 on vi's.
Ominously, however, all three of the experienced paintballers in the
crowd elected to fight for vi.
As we waited for mysterious rituals to complete in the paintball shed,
there was much humorous analogizing--vi fans claiming that Emacs's
guns ought to take forever to load, countered by Emacs partisans opining
that vi fighters should be unable to move and fire at the same time.
``You shall feel the power of the Lisp side of the Force!'' declaimed
one black-masked Emacs fan a la Darth Vader, met by hoots of derision
and yells of ``vi rules!''
Additional humor was provided by the boss paintball referee, who
understood neither our theological disputes nor the lemur and gnu
emblems on our team shirts. He gave up early and started referring to
the teams as ``monkeys'' and ``cows'', much to the amusement of both sides.
Eventually, not too long after the official start time, we listened to
a safety lecture, picked up our guns, face masks and
glycerin-capsule ammunition and marched into the woods. Each team
got a fortified fire base; the game was elimination, with the last man
standing winning for his team.
Telling friend from foe turned out to be a bit of a problem, as both
teams were wearing white T-shirts with black emblems and the colored
arm bands we'd been issued were not really conspicuous--some truly
valiant hackers were hit by friendly fire. There were heroic
charges and stealthy ambushes, sniping duels and stand-up fights. The
paintballs flew thick and fast, and the woods resounded with cries of
``Out! Out!'' as pigment-splotched casualties exited the field, guns
held over their heads.
The teams' combat styles were allegorically perfect. The vi guys were
fast, aggressive and sloppy; the Emacs team was slow, tried to think
things out and play tactically. Result? The vi guys waxed the Emacs
team, winning three out of four games. Evidently (as many on both
sides later agreed, amid much laughter) paintball rewards different
virtues than programming.
The event was a success, and general kudos went to Mike Maher of Red Hat
from whose brilliant and obviously twisted mind the concept originally
sprang. Next year perhaps we'll tackle Perl vs. Python or Red Hat
vs. Every Other Distribution or some other chronic flame war--and,
hopefully, get different-colored shirts so we can tell each other
apart!
__________________________________________________________________________
Copyright © 1998, Eric S. Raymond
Published in Issue 30 of Linux Gazette, July 1998
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
The Fourth Annual Linux Expo
By David Penland
__________________________________________________________________________
Photo Album
__________________________________________________________________________
This year, Red Hat Software decided to hold the fourth annual Linux Expo
at Duke University's Bryan Center in Durham, North Carolina. The event
was scheduled over three days, from May twenty-eighth to the thirtieth.
In addition to the normal vendor displays and conference, the Linux Expo
web site promised such diverse attractions as a Quake fest and a
paintball tournament. I arrived at the Center at seven-thirty on
Thursday to find
over one hundred people already ahead of me in line. Registration wasn't
until eight o'clock. Apparently I was not the only Linux fanatic champing
at the bit.
The doors did not actually open until a little past eight, and I did not
get in to register until about eight forty. As a pre-registered attendee,
I received a Linux Expo tote bag bearing the Expo logo, as well as logos
of Expo sponsors. Inside I found a bound copy of the proceedings, a
VA Research tee shirt, a Red Hat cap, an issue of SysAdmin, and a Caldera
flashlight, as well as flyers advertising specials at Expo vendor booths.
Prominently placed in front of the entrance was the Red Hat booth. Their
booth featured the new Red Hat Linux 5.1, due to be released the
following Monday. Also on the upper floor were the Caldera, Linux
Hardware Solutions, Linux International, Solid, and RHAD Labs booths, as
well as the Expo store and the Softpro Bookstore.
Because of registration delays, the tutorials and technical conference
fell thirty minutes behind schedule, and remained out of sync with the
business track for the rest of the day. The Extreme Linux tutorial was
kicked off by Mad Dog Hall, who explained the name Extreme Linux and the
snowboarding penguin logo. Basically, Extreme Linux is Linux with an
attitude. Although Mad Dog said that the project's founders do not want
to tie the commodity cluster idea to a single operating system, he urged
people to use the name Extreme Linux when referring to clusters of Linux
machines.
After Mad Dog finished, Peter Beckman explained how Extreme Linux
clusters were used at Los Alamos' Advanced Computing Labs. Several
members of his team talked about their experiences with the system and
the problems they had solved. The talk featured the Linux Expo cluster,
a four-node cluster set up especially for the show. The cluster
consisted of four dual 333 MHz Pentium IIs, each with 256 megabytes of
RAM and a four-gigabyte disk drive. The cluster was tied together with a
Myrinet network. After putting the cluster through its paces with
modeling programs, Beckman decided to bring out a "practical
application", the Extreme Linux monster truck.
Although it had been a Radio Shack remote-control toy in a previous
life, the monster truck had undergone an "Extreme" transformation. The
body had been removed, and the truck's circuit board hacked. For vision,
the monster truck had a Connectix QuickCam with a custom mount to allow
panning.
Mounted on top was a Toshiba Libretto with a wireless Ethernet connection
to the cluster. An operator sat at the console of the cluster, controlling
the truck as it cruised across the floor observing the crowd with
its quickcam. The operator's console was projected on a screen, and the crowd
could see themselves from the truck's point of view thanks to the quickcam.
Beckman assured us that the truck had a practical use, pulling network
cables under the raised floor at Los Alamos. Without a doubt, the truck
stole the show. For more information see http://www.Extremelinux.org/.
After the tutorial, I decided to make my way to the vendor area on the
lower level. Strategically placed at the entrance to the vendor area was Cobalt
Microserver Inc. They were showing the inexpensive Cobalt Qube microserver, a
blue 7.25"x7.25"x7.75" cube with powerful intranet server capabilities. This
little box will be near the top of every Linux geek's Christmas list.
Inside the door I found Stay Online, a retailer of inexpensively priced
computer components. The vendor area was so jammed with Linux
enthusiasts that I had a hard time getting to every booth. Linux Mall
was once again on hand offering great deals on everything. I picked up
Red Hat Linux 5.1 for twenty-five dollars and Star Office Commercial for
fifty dollars. Sun Microsystems was a very noticeable new addition to
the Expo this year, showing off complete UltraSPARC computers as well as
UltraSPARC-based motherboards for building your own homebrew UltraPenguin
machine. Alta Technology and Paralogic, two vendors of pre-built Extreme
Linux clusters, were also present. At another entrance, Jim Paradis of
Digital Equipment Corporation entertained a mass of power-hungry
Linuxers with a new SMP Alpha machine.
Cobalt wasn't the only company with miniature gee-whiz computers. Corel
Computer was showing off their soon-to-be-released NetWinder computers.
These little boxes (9.5"x6"x2") have everything you could want in an
intranet/internet client, and can be used as web servers as well. The
NetWinder could be serious competition for the Qube, but I think many
customers might choose a mixed environment of both.
Another major attraction was the RHAD Labs booth, which featured a
couple of computers running GNOME. The booth was staffed by members of
the RHAD Labs development team, and Miguel de Icaza made occasional
appearances. At just about any point in time, people were lined up three
deep to get a look at GNOME and ask the developers questions. One of the
GNOME computers had a camera attached to it, and some interesting
pictures from the Expo have been posted at http://www.gnome.org/.
Toward the end of the second day of the Expo, I got an unexpected
surprise which made the show immensely better than I had expected. While
looking through the popular tee shirts offered by Xunilung, I overheard
someone proclaiming that Linux was a misnomer, and that the correct name
of the system was GNU/Linux. This was a position I had heard before. I
stepped back from the tee shirts to peek around people who had gathered
around a table placed perpendicularly to Xunilung's. Sure enough, the
GNU/Linux admonishment was coming from Richard Stallman. For those who
are not familiar with rms, as Stallman is often called, he is the person
who started the GNU project in 1983 to provide a free version of Unix
for anyone who wanted it, unencumbered by proprietary licensing
restrictions. Stallman is responsible for the Free Software Foundation
and the General Public License.
Although I do not really agree with him about the naming of Linux, I
firmly believe Linux could not have been developed without the tools
provided by the FSF. Stallman has been a hero of mine since before Linus
discovered Minix, so I was somewhat speechless when I saw him there
unannounced.
I stood back and watched for a while as young hackers got autographs and
bought gnu tee shirts, CD-ROMs, and books. Occasionally Stallman would place
the platter from an old disk pack on his head. With this "halo" in place, he
became Saint Richard, patron saint of the Church of Emacs, and he would bless
the young hacker's computers provided they did not have any proprietary
software on them. When it was my turn to talk to Saint Richard, I thanked
him for the work he had done, and bought two Emacs books. He signed the
books "happy hacking" and "happier hacking", Richard Stallman.
After my encounter with rms on the second day of the Expo, I found my
way to the auditorium where Linus would be giving the keynote speech. I
was lucky, I found a seat about fifteen rows back from the stage. Less
fortunate fans continued to file in for another fifteen minutes, and by the
time Linus got on stage, people were standing and sitting in the aisles. An
overhead projector indicated the theme of Linus' talk, titled Ramblin'
Linus. Linus took the microphone and said "I'm Linus, and I am your god",
at which point the crowd responded with deafening applause. Linus thanked
various people for their work, in particular Alan Cox who has taken over
the normally thankless job of maintaining the stable kernel for the last
year or so. Some of the topics covered were the current state of the
development kernel, the upcoming release of the 2.2 kernel, and future
directions of kernel development. Linus spent about twenty minutes answering
questions from the audience, and then everyone filed out for a southern
style barbecue dinner in the university yard.
Conference talks were the main focus of the Expo for me. Unfortunately,
there were so many talks offered that I had a hard time making up my mind about
which ones to attend. Extreme Linux is the only tutorial I made it to,
but there were eleven more, on subjects as diverse as programming with gtk+,
Python, hacking the Linux kernel, LinuxConf, and a demonstration of the Coda
filesystem.
The conference was broken up into a business track and a technical
track. The technical track auditorium was where I spent most of my time, but
I did make it to several interesting business talks. Robert Hart of Red
Hat Software gave a talk on Linux certification, dealing with what certification
meant and who should try to get it. He also encouraged the audience to
drop off resumes at the Red Hat booth, which I did. I am still waiting
on your call, Robert. Mad Dog gave an anecdotal talk on how Linux is
used around the world, and Tim Bird of Caldera filled us in on the COAS
project. COAS is a project to develop an integrated administration tool for
Linux and possibly other unices; they are looking for volunteers, so drop
them a line. The last talk in the business track was actually a panel which
discussed free software licensing. The panel consisted of Eric Raymond,
Richard Stallman, and Bruce Perens, who moderated. Raymond's and Stallman's
views were not exactly in sync, so some very interesting discussion concerning
the state of free or open source software licensing took place.
The technical track started earlier and ran longer than the business
track all three days. Unfortunately, registration problems and technical
difficulties threw the schedule off the first two days, and technical talks
were out of sync with business talks, which made it hard to move freely
between tracks. David Miller gave a very technical talk on optimizing the
Cobalt Microserver. Peter Braam of Carnegie Mellon University gave two
informative talks on the new VFS interface and the Coda distributed
filesystem. The Coda team has made a lot of progress, and the filesystem is
something worth looking into. Peter also mentioned that the team is looking
for a good system programmer who likes interesting work, but doesn't mind
being poor.
Bruce Perens and Daryll Strauss both gave talks on the use of computers
to make movies. Strauss showed us how a pile of Alphas running Linux helped
with the making of Titanic. During a short video presentation, he pointed
out some amazing effects that were computer generated. Bruce went over
some basics of computer animation in Toy Story, and showed an experimental
piece by Pixar called Geri's Game. The auditorium was packed for both talks.
Miguel de Icaza discussed the GNOME project before a very large crowd. Due
to technical problems with his laptop, the talk ran over by about thirty
minutes. Fortunately, Miguel is a very entertaining speaker, and he kept
the audience's attention while half of the RHAD Labs team and a concerned
member of the audience fretted over his computer. Lars Wirzenius presented
his Linux Anecdotes, a history of the Linux system from someone who was
right there when it was created. Lars shared an office with Linus at the
University of Helsinki, and was the first person to actually run Linux
on his computer. Alan Cox, a fixture at Linux Expo, gave a talk about the
trials and tribulations of porting Linux to the Apple Macintosh 68K. His
talk was titled "I don't care if space aliens ate my mouse". The title
comes from an old Apple document, apparently the only official document
ever written on the Apple mouse.
These were only a few of the talks given at the Expo; a complete list
can be found on the Linux Expo web site: http://www.linuxexpo.org. In
addition to vendors and talks, there were other things to keep Expo attendees
busy. A Quake fest ran all day every day on the lower level, with deathmatches
every fifteen minutes. Prizes were awarded for the highest body count from
each matchup. Birds of a Feather sessions were offered throughout
the three days on a variety of topics, and an "email garden" was set
up to allow attendees to get access to the net for checking their email.
On Thursday, the age-old question of which editor, Emacs or vi, is superior
was finally answered. Obviously, the only way to resolve the issue was
through brute force, so the Expo hosted Editor Wars, a paintball tournament.
When the CO2-propelled paint mist settled, the vi team emerged from the
field victorious.
Wrapping up the show Saturday evening was the second annual Linux Bowl.
Mad Dog was the host, and the teams consisted of conference speakers and
audience participants. Rasterman, of RHAD Labs, and audience members were
the judges. Bruce Perens and Eric Raymond were two of the contestants.
Some of the questions asked were: what lilo option is used to list currently
mapped files (answer: -q); what was the first kernel tar.gz to exceed
ten megabytes (to which Bruce Perens promptly replied "Microsoft NT"; the
correct answer was 2.1.88); which movie featured the Red Hat office building
(one contestant replied "Debbie Does Durham", and Mad Dog felt compelled
to award one point; the correct answer was Kiss the Girls); why was the
Beowulf project named Beowulf (answer: it sounded cool); and a trick question,
what was the first system to run UNIX (answer: a PDP-7).
The Fourth Annual Linux Expo was a tremendous success, and I think
everyone went home happy. The show organizers deserve a big round of applause
for their efforts, and if this year's turnout is any indication of things
to come, they had better get a bigger building next year.
__________________________________________________________________________
Copyright © 1998, David Penland
Published in Issue 30 of Linux Gazette, July 1998
__________________________________________________________________________
[ TABLE OF CONTENTS ]
[ FRONT PAGE ]
Back
Next
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
LinuxCAD Impressions
By Robert Wuest
__________________________________________________________________________
Friday, June 12
I started calling Software Forge first thing this morning. NT had crashed
yesterday and hosed its own installation. I had no rescue disk. The day was
starting real bad because I had an instrument design project getting behind. I
needed to work on the mechanical layout. I'm sick of problems with microsoft
operating system products.
I've been using AutoCAD for over 14 years and have seen it turn into a fairly
decent CAD package. I use R13 and have used everything back to around 2.0.
Getting It
Boot into Linux. It always works. There's a
reasonably priced CAD package I'd seen advertising itself in news posting
after news posting. Supposed to be like AutoCAD. I'm just going to break down
and buy it and do the design in that. Start Netscape and head to the cola
archives. It's moved its home, so I change the bookmark, search the archive
for LinuxCAD and find Software Forge's home, http://www.linuxcad.com. All it
has is an E-mail address and a phone number,
(847) 891-5971, in Chicago, Illinois.
Screen-shots are there; check them out. I guess, from the numerous copies
of the ad I had seen, I was expecting something that acted like AutoCAD.
Notice that the first window shows the Columbia drawing, columbia.dwg,
and that the second window's title is "AvtoCAD-SoftwareForge".
http://www.softwareforge.com/linuxcad/pricing.html says this:
"LinuxCAD is a true open software product and as such it has been
ported to all major UNIX platforms. The pricing of LinuxCAD for
platforms other than Intel depends from the number of copies you
have chosen to purchase , the more copies the lesser price. All
ports retain full original functionality and are fully compatible
with original LinuxCAD for Linux for Itnel and with AutoCAD".
"True open software product"? Where's the source? The license is in no way ope
n
and the source is no where to be found. Meaningless buzzwords.
I call and call all day: someone finally answers the phone about mid-afternoon
and sells me a copy. I'm a little confused as to who was serving whom after
that conversation, but I did manage to buy a copy and download it from their
FTP site with no problem. And she told me that there have been over 100 copies
sold. So now I've forked out the $$$ for this thing and what follows is what
I experienced.
The readme file (on the FTP server) said to put the archive in the directory
where you want to install it and untar it. Enough for those who know their way
around Linux pretty well. I go to the file with TkDesk and my pop-up menu
isn't right for the file :-0. This file is unconventionally named slk96_tar.gz,
not slk96.tar.gz like it should be, so I promptly renamed it.
Examining the contents, I see a straight collection of 25 files: no
directories, no man pages, info pages or HTML docs. There are several .dxs,
.mnu, .scr and .txt files. I make a directory called linuxcad, move the
archive there and run extract from the TkDesk pop-up.
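In shell terms, the whole installation boils down to something like this (the
directory name is my own choice, and the rename only matters so that
file-manager menus recognize the archive):
$ mv slk96_tar.gz slk96.tar.gz      # give the archive a conventional name
$ mkdir linuxcad
$ mv slk96.tar.gz linuxcad/
$ cd linuxcad
$ tar xzvf slk96.tar.gz             # all 25 files land in the current directory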
The readme file also has this bombshell:
" Optional LinuxCAD extensions
================================
1) Print option:
Hardcopy to DeskJet , LaserJet
and to MSWindows based LinuxCAD printserver
---------------------
$100
2) Plot option:
To HP-GL compatible plotters
---------------------
$100
3) DXF Import option
---------------------
$100
4) Customization option:
Includes
4.1 Hot keys menu and user programmable pull down
menu.
4.2 GNU C/C++ programming interface.
---------------------
$200
5) 3D design option
---------------------
$200"
This is not a $75 package, it's a $775 package.
First Impressions
Double-clicking (back in TkDesk) the executable, I saw that it ran, produced no
window and exited with status 0. It had spit out an error message, which
came out on my login vc:
LinuxCAD v 1.53
Portable Computer Aided Design program for Linux and Unix.
Usage:
linuxcad
But I didn't see that until I started an xterm and ran it from there. So I
gave it a filename this time and it ran. Here's the command I used:
$ ./linuxcad test.dxs
It puts itself in the background. I exited immediately and it gave me a dialog
asking if I wanted to save my changes. What changes? I had just started and
exited. I'm not even going to read the docs. Just see what I can learn by
fiddling with it a bit. I can immediately see that this needs polish.
$ ./linuxcad test.dxs # again.
The "line" command worked, but "l" didn't. "Move" worked and "w" selected enti
ties
by window, but the "m" command didn't work. OK, it doesn't have command
aliases. Oh, they're $200.
I draw a few lines. "line", click, click... oops, still drawing the line.
OK, right click. The line is placed, but in the text area, all that's there
is:
Command aborted !
Command:
There are no scroll bars and no handle to resize the command area. There are 3
lines of command area stretching across the bottom of the window. I can scroll
back by using X-selection, but that doesn't give very good control.
Line editing is very poor. My arrow keys do nothing. The backspace key
works, but the other extended keys do nothing. I start a
command and can't find a key that cancels it. The right mouse button will, but
why don't the obvious keys or ^-c work? Let's try all the keys
:-). ^-z, ^-x, ^-c, etc. ^-j causes a "point expected !" message. After a
lot of keys, it crashes. Looks like a buffer overrun to me. Restart. It
crashes every time and work is lost. No core dump or error to the parent
shell.
^-m opens a command history window with both scroll bars, but I can't type a
command in it. It just beeps at any key, except that the cursor control keys
now work! The cursor is not visible, but the movement keys seem to do what
they're supposed to.
That command history window insists on staying over the drawing area. I use
my M- binding (which I have defined in ~/.fvwmrc to lower a window) and the
window goes away. If I do any window manager operation that brings the window
to the top, the history window ends up over the drawing area. It has an "Exit"
button, so I press it.
There is no coordinate display in the main window. "Pline" doesn't work. "c"
doesn't work to close multiple line segments.
Command:line
From point:0,0
To point:1000,1000
To point:
No way to end it with keys. Have to right click again. Too many unnecessary
linefeeds wasting vertical screen space. No "Zoom" command.
After about an hour of playing:
********** This is not AutoCAD **********
This program does not have an AutoCAD interface, which, based on all of the
comparisons made by SoftwareForge to AutoCAD, it should have. There are
commands to zoom: "zoomw", "zoomall". "Zoom" should use the acad interface.
Other commands do. And there is no equivalent to the "x" option. I do this
often in acad:
z e z .9x
If you don't know acad, that will give a 90% zoom factor scaled to the display
window (everything is visible with a little border around the outside).
The top of the screen has six menu items and six buttons. Draw/line starts line
drawing. The edit menu has no undo. "Undo" doesn't seem to work. Undo is
under "Edit/Edit../Undo/Set mark" and "Edit/Edit../Undo/Undo to last mark". It
looks like one has to set marks and can't just walk backwards undoing actions
one at a time.
3D'll run another $200 (item 5). Draw/Draw 3D.../Sphere gets me this message:
"This is an optional feature of LinuxCAD
Please check the readme files to see the current pricing
for the optional features.
Command:"
Drawing area and window display area aren't the same. You must use
"Options/Settings/Screen Extents/..." on the menu. This is something I really
don't like right away.
No short commands. Looks like they cost $200 (item 4.1).
Bad command line area with virtually no editing in it.
Changed zoom interface.
No .xyz filters.
Keyboard focus moves to buttons in menu area. You have to click in the
command area after using a button before you can type another command.
"U" doesn't undo the last line segment while drawing lines.
"Undo" requires setting a mark.
No tooltips.
No coords display.
I know AutoCAD very well, but still, I have to read the documentation :-0.
Print only to a bitmap. And who wants to print to a microsoft print server?
Another $100 (item 1) for printing. There is no PostScript printing at all.
No cut, copy and paste between multiple instances of the program.
You can't edit with only the keyboard.
Some Techy Details
I'm not sure what toolkit was used. Running ldd linuxcad reveals the following
on my system:
libXt.so.6 => /usr/X11R6/lib/libXt.so.6 (0x4000b000)
libX11.so.6 => /usr/X11R6/lib/libX11.so.6 (0x4004d000)
libXext.so.6 => /usr/X11R6/lib/libXext.so.6 (0x400e3000)
libg++.so.27 => /usr/lib/libg++.so.27 (0x400ed000)
libm.so.5 => /lib/libm.so.5 (0x40121000)
libc.so.5 => /lib/libc.so.5 (0x4012a000)
libSM.so.6 => /usr/X11R6/lib/libSM.so.6 (0x401e6000)
libICE.so.6 => /usr/X11R6/lib/libICE.so.6 (0x401ef000)
libstdc++.so.27 => /usr/lib/libstdc++.so.27 (0x40203000)
Searching the executable doesn't help, either. Searching with
$ strings linuxcad | grep -i copy
only finds a couple of Software Forge copyright strings (and the word "copy" a
bunch).
Multiple instances run fine.
$ ps -m 3638 3636 # shows the memory usage:
PID TTY MAJFLT MINFLT TRS DRS SIZE SWAP RSS SHRD LIB DT COMMAND
3638 p6 383 195 1080 1920 3000 0 3000 2176 0 206 linuxcad
cab1.dxs
3636 p6 425 200 1140 1976 3116 0 3116 2288 0 207 linuxcad
test.dxs
Startup time is about 2 seconds each on a P100/48 MB system.
Here's a link to a listing of a blank drawing file, cab1.dxs
Licensing
The license is very restrictive. It's the basic single machine/single user
license. I'm not sure I can even include quotes from the documentation, the
way license.txt is written. It says I can't reproduce or distribute or even
revise the documentation. Does that mean that if I remove some of the double
spacing or add notes throughout the documentation, I am in violation of
it? In any case, I won't publish the license here. The high points:
* Install on a single computer for "your own individual use". Can my
wife and children use it?
* You can make one copy for archival purposes.
* The program can be transferred. Standard boilerplate. You can't
keep a copy.
* Software Forge is not liable for anything that goes wrong or any
damage that the program does to you, your computer or your mother.
* It says there is some welcome screen with a copyright notice, but
that is wrong; there is no startup message (and it should stay
that way unless it is a window that pops up and then goes away
once the program starts).
* The user's guide specifically calls out that it cannot be released
under the GPL, which I think is kind of a strange detail to add.
* You can't reverse engineer the program.
* If you violate the license, they'll try and throw you in jail.
Conclusions
First, I should include this quote (double spaced and all) from the
linuxcad.txt document included in the distribution:
" ATTENTION:
This product is still very fresh and is under development , it may
crash from time to time , do save often and please report all crash situations
to Software Forge Inc. by e-mail to: unixguy@aol.com
We add new features quickly and your input about what features you want is
valuable. "
So make a demo available and send an announcement to cola every time it is
upgraded. Heck, this is the demo version.
So far, only the one crash I wrote about above. But this is not a $775
package. In its current state, it is not a $75 package, even with everything
thrown in.
Upgrades are available for only six months. And this is by no means an
exhaustive list of missing features. Just what I found real quick.
I'd suggest waiting.
I now must get NT working first thing Monday morning. I still don't have a CAD
package to do my work in Linux and then take it to acad to make a final
drawing.
I'm still stuck using microsoft. :-(
__________________________________________________________________________
This document is copyright Robert Wuest, PE.
It is hereby released into the public domain.
(except those portions copyright Software Forge, Inc.)
_________________________________________________________________
Copyright © 1998, Robert Wuest
Published in Issue 30 of Linux Gazette, July 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
[INLINE]
Book Review: A Methodology for Developing and Deploying Internet & Intranet
Solutions
By Jan Rooijackers
_________________________________________________________________
* Authors: J. Greenberg and J.R. Lakeland
* Publisher: Hewlett Packard Professional Books by Prentice Hall
* E-mail: sales@prenhall.com
* URL: http://www.prenhall.com/
* Price: $39 US
* ISBN: 0-13-209677-3
_________________________________________________________________
The goal of A Methodology for Developing and Deploying Internet &
Intranet Solutions is to be a ``guide'' for project managers. Almost
all situations a project manager can face--from project members to
backup media to making time lines--are described herein. The book
consists of 11 chapters, plus appendices. Everything is written as a
story from the authors, who combined have more than 20 years of
computer experience. Every chapter contains small tips for the project
manager.
In Chapter 1, the reader is introduced to employees of a company that
is used as a case study throughout the book.
In the next chapter a proposal is put forth, and all facets of
handling it from kick-off meeting to support organization to signing
the contract are described. In this book, the project manager makes
use of the WBS (work breakdown structure) model. This model breaks the
project into phases and sub-phases so that each can reach its own
milestone.
Chapter 3 puts the reader into the place of a successful project
manager, who has convinced the ``customer'' to sign the contract. The
customer could be either internal (a department) or external--imagine
yourself as the consultant. This chapter begins with the internal
kick-off meeting. Roles and activities are assigned and given
deadlines, so everyone knows what to do when.
Discussion of the software development cycle begins in Chapter 4 with
writing an approach document. This chapter explains to the project
manager what the document must and must not contain--from requirements
to education. Also, some development methodologies are discussed.
Next, we get to the fun part (only 20 pages)--development. This is
familiar stuff which I face each working day with the
Internet/Intranet. The authors discuss creating HTML pages, internal,
unit and system testing and, last but not least, a checklist to see if
everything is working.
The remaining six chapters (6 to 11) are short, averaging eight pages
each. Implementation is handled in Chapter 6; networking and backup
are discussed in Chapter 7. Chapter 8 covers the various applications
and system testing at a high level, so that you get a complete picture
of how everything fits into the project. The last three chapters are
about putting the project on the user's desk. Also, two appendices are
included, the first of which is better: it is technical and briefly
explains the operating system layers and the Internet. While this
information is not presented in great detail, what is here is quite
interesting. The other appendix deals with project management.
The book did not live up to my expectations. Too much of it is written
in the form of a diary or personal anecdotes for my tastes; not enough
is related to actual technical details of the Internet/Intranet. A
Methodology for Developing and Deploying Internet & Intranet Solutions
will bring no added value for persons who have already been working
for some years in the IT area. However, I do think it is a good book
for people who are new to the IT business, and who want to know more
about project management in order to become a project leader.
_________________________________________________________________
Copyright © 1998, Jan Rooijackers
Published in Issue 30 of Linux Gazette, July 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
Blackbox title image
The Blackbox Window Manager
By Larry Ayers
_________________________________________________________________
Introduction
Someday (I fantasize) an academic specialty devoted to the taxonomy of
free software will arise, complete with abstruse journals filled with
hair-splitting analyses of the bloodlines, interbreeding, and
evolution of this ephemeral medium. I can imagine a future scholar
publishing a paper in which the various developmental strands of
late-twentieth-century Linux window-managers are analyzed, complete
with photographs of the Sunsite digital archaeology project, conducted
amidst the ruins of ancient Chapel Hill.
Returning to the present, two trends can be distinguished among the
many window-manager projects extant today. The first is either
inspired by and/or descended from Robert Nation's influential fvwm
window-manager. Fvwm2, Afterstep, and (to a lesser extent) WindowMaker
are examples in this category. These window-managers tend towards
extreme configurability and typically are able to load special-purpose
modules such as desk-top pagers, CD-players, and hosts of others.
Configuration of this sort of manager can be a daunting task,
especially for Linux beginners, though the existence of
well-thought-out and esthetically pleasing "themes" (in this context
meaning a package of configuration files, backgrounds, and pixmap
icons) and their availability on the net can give a new user a
head-start.
Perhaps as a reaction to these complex and feature-laden
window-managers another sort of manager has been appearing lately.
Marco Macek's icewm is deliberately not as complex as the above "big"
window-managers but nonetheless has the most commonly needed features
and a moderately configurable appearance. Icewm has been through quite
a few beta versions now and has become remarkably stable. Another
example is blackbox.
Blackbox is a new window-manager written by Brad Hughes. Like icewm,
it was coded from scratch in C++. It's small (the source archive is
just 50 kb.), fast, and has a thoughtfully-designed and pleasing
default appearance. This latter feature has probably contributed to
blackbox's transition from a personal undertaking to an open source
project which has received bug-fixes and enhancements from several
other programmers.
Impressions
Like WindowMaker and icewm, blackbox uses workspaces rather than the
virtual desktop/pager combination familiar to fvwm users. The main
difference between the two methods of managing windows is that the
workspace approach lacks the miniature representations of the various
desktops seen in the pager window. It's really a psychological matter,
and both methods work equally well once habits have been formed. I
surmise that the first virtual desktop system (or even the idea of
iconized windows and window-lists, which serve much the same purpose)
was developed by a programmer who just got tired of shuffling through
layered stacks of windows searching for a certain one.
At the bottom of a blackbox desktop is an immovable multi-purpose bar,
with a workspace menu on the left and a digital clock on the right. In
between is a blank area, which had no function in the earlier betas
but which now contains an iconized window-menu. Here's what it looks
like, with the default colors:
Blackbox toolbar
The gradient shading of the titlebar and toolbar is a nice touch, a
feature usually found only in the more elaborate window-managers. All
graphics routines are handled internally so no extra image libraries
are needed. Blackbox is unusual in that it doesn't use the Xpm pixmap
library, so the only applications which will display an icon when
minimised are those with icons embedded in the executable, such as
Netscape and xv.
Unlike most window-managers the root-window menu is bound to the right
mouse button rather than the left, an arrangement which will be
familiar to icewm and OS/2 users. The menu-items are configured in a
separate file; both the menu and the overall configuration files are
placed in the /usr/X11R6/lib/X11/app-defaults directory, a traditional
location for X resource files. The menu-file's syntax is clear and
easy to use. Here is a screenshot of a menu I've been using:
Blackbox menu
The menu will remain "stuck" to the desktop if it is moved after it
appears and can be dismissed via a right-mouse-button click any time
thereafter.
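The article doesn't reproduce the menu file itself, but to give a flavor of
the format, here is a rough sketch based on the syntax of later blackbox
releases (the bracketed keywords and the example commands are recalled from
later versions and may not match this beta exactly):
[begin] (Blackbox)
  [exec] (xterm) {xterm -ls}
  [exec] (Netscape) {netscape}
  [submenu] (Graphics)
    [exec] (xv) {xv}
  [end]
  [restart] (Restart)
  [exit] (Exit)
[end]
Each entry pairs a label in parentheses with the command to run in braces,
which is why the file is so quick to edit by hand.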
Keyboard short-cuts are provided for various window operations,
including the Mac-like title-bar roll-up, as well as switching between
workspaces. I am pleased by the relative paucity of key-bindings in
both icewm and blackbox. Some of the larger window-managers have many
key-bindings, some of which conflict with common application bindings.
I've used fvwm2 quite a bit, and it always annoyed me that Netscape's
alt-left-arrow-key key-binding wouldn't work, as it evidently was
reserved for some fvwm function in my ~/.fvwm2rc file, which I never
did get around to tracking down and disabling. You know how it is;
this sort of minor configuration isn't important enough to just drop
everything and fix right now. It's a minor annoyance, but I was
grateful that icewm and blackbox included just a few essential
bindings.
Blackbox is still a relatively young project and the window-manager
isn't completely stable yet. I've had it crash the X-server a few
times, but I've long been in the habit of saving work frequently
(which is always a good idea when running beta software!). Either
icewm or wmx may be a better choice as a lightweight window-manager if
the need for stability is paramount, but blackbox development seems to
be progressing rapidly. More users trying it out and reporting
problems will doubtless speed the process.
The blackbox web-site is the best source of further information and
the latest source archives.
_________________________________________________________________
Larry Ayers
Last modified: Sun 28 Jun 1998
_________________________________________________________________
Copyright © 1998, Larry Ayers
Published in Issue 30 of Linux Gazette, July 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
Lesstif: One User's Impression
By Larry Ayers
_________________________________________________________________
One of the main differences between Linux and the commercial Unix
flavors is that the commercial unices commonly come with one version
or other of the proprietary Motif libraries. Motif is basically a
"widget-set", a set of libraries and header-files which give X-windows
applications a characteristic look, including such features as
dialog-boxes, menus, file- and font-selectors, drag-and-drop support,
etc.
There are several free widget-sets which offer roughly the same
functionality, such as GTK, so Motif isn't a necessity for a Linux
system except for one factor. Many of the popular free-software
projects come from institutions such as universities or government
agencies, with a few originating in a commercial or corporate setting.
These institutions often use a commercial Unix and programmers tend
therefore to use the Motif development tools.
A year or so ago I bought a copy of SWIM Motif from the LSL web-site.
There were several software packages I wanted to compile which
required the Motif libraries and header files, such as XEphem, NEdit,
DDD and Vim. The price of a commercial Motif package had been close to
two hundred dollars, but the new SWIM version was selling at that time
for about sixty, so it seemed like a good deal. It's a quality product
and worked well for me until I decided to upgrade my Debian system to
Debian 2.0, which is based on libc6 (as are Red Hat 5.0 and 5.1). I
used the handy autoup.sh script, which upgrades the core packages of
the distribution in the proper order. Everything was hunky-dory until
I realized that my proprietary Motif libs were based on libc5 and
wouldn't function in a libc6 environment. The LSL company offers a
thirty-dollar upgrade for customers in my situation, but I felt that
I'd spent enough on what isn't really a necessary software package,
and who's to say whether some future changes in Linux might put me in
the same situation again? Situations like these really make me
appreciate source-code availability!
I'd been hearing favorable reports on the newer versions of Lesstif, a
free and open-source Motif 1.2 clone created by a team of developers
called the Hungry Programmers. The release of the Netscape source
earlier this year had attracted new Lesstif users, as Netscape needs
Motif to build. More users means more bug-reports and probably some
additional programming help; I can't help but think that the new
Netscape situation was a shot in the arm for Lesstif. The Lesstif
releases seem to be more frequent now, for whatever reasons.
I really didn't know what to expect from Lesstif. I remembered reading
usenet postings concerning Lesstif's failures to work with this or
that application and numerous comments on display flaws and other
bugs. These comments were made over a year ago, which is approximately
a decade in "computer time", so I was hoping for at least a marginally
useful product.
The first release I tried was 0.83. To my surprise, it compiled and
installed as easily as any other quality GPL package. Feeling rather
foolish that I'd spent hard-earned cash on a commercial Motif
implementation, I proceeded to re-compile (over the course of a few
weeks) every application which I had previously linked with SWIM
Motif. So far every one I've tried has worked well with Lesstif; some
packages needed the paths to the Lesstif libraries and header-files
specified in the Makefile, but this was the only tinkering I've had to
do. I was particularly pleased that NEdit now works with Lesstif, as
this editor's dependence on Motif has until now hindered its
widespread use by Linux users.
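The Makefile tinkering involved is typically nothing more than pointing the
build at wherever Lesstif was installed; a rough sketch, assuming the default
/usr/local prefix (the variable names vary from package to package, and Lesstif
provides the libXm library that Motif applications link against):
# Point the build at the Lesstif headers and libraries; adjust the
# prefix if Lesstif was installed somewhere other than /usr/local.
MOTIFINC = -I/usr/local/include
MOTIFLIB = -L/usr/local/lib -lXm -lXt -lX11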
The few bugs I've seen in the Lesstif version I'm using now (0.85) are
minor and have little effect on usability.
One reason Lesstif is important for the Linux community is that its
existence and usability make it possible for the developers of
distributions to package Motif-linked applications without the
necessity of dealing with non-free software. The application
developers can continue to use Motif, while Linux users can still
compile and run the programs without the proprietary libraries.
Jon Christopher, a member of the Lesstif team, has written an essay
about Lesstif's history and prospects which is well worth reading. It
was originally contributed to the Slashdot web-site, and is available
here. The Lesstif web-site has the latest releases and other news.
_________________________________________________________________
Last modified: Sun 28 Jun 1998
_________________________________________________________________
Copyright © 1998, Larry Ayers
Published in Issue 30 of Linux Gazette, July 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
Sabre: An Svgalib Flight Sim
By Larry Ayers
_________________________________________________________________
Introduction and Disclaimer
I haven't written about Linux games for the Gazette, mainly because I
don't play them much. Oh, every now and then while waiting for a
download to complete I'll play Xgalaga or XEmacs-tetris for a while,
but for me the real Linux amusement is figuring out how to compile,
install, and use the numerous software packages lurking out on the
net, unpublicized and just waiting to be explored.
A couple of days ago I was reading the current Need to Know British
WWW news site, and I saw a mention of a Linux flight-simulator called
Sabre. I ended up at the Sabre web-site and was impressed by the
evident humor and good-nature of the site's developer (check out the
page describing how to get sound working with the simulator!). Though
I've seldom used flight-simulators, I decided to give this one a try.
Back To The Korean War
The developers of Sabre (Dan Hammer, with assistance from Antti Barck
and David Mansfield) have chosen to confine their attention to the
Korean War, so the aircraft involved are mostly early jets with a
smattering of WWII-era propeller planes. The graphics are well-done,
with texture-mapped clouds and landscapes. The general effect is
reminiscent of a detailed cartoon (not Hanna-Barbera style!). Here are
a couple of scaled-down screenshots (Sabre can even take its own
screenshots while running; just press e and the current scene will be
saved to a ppm file):
Sabre screen 1 Sabre screen 2
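Incidentally, the ppm files Sabre writes can be viewed or converted with
common tools; for instance (the filename here is just illustrative, not
necessarily what Sabre names its screenshots):
$ xv sabre000.ppm                        # view the screenshot
$ ppmtogif sabre000.ppm > sabre000.gif   # or convert it with the netpbm tools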
The first public release of Sabre was in August of 1997, so it's a
relatively new project. Don't expect a state-of-the-art flight-sim
like the numerous commercial products available. Sabre is more similar
to better-quality DOS flight-sims of a couple of years ago. The
up-side to this is that expensive hardware (such as an ultra-fast
processor or a 3DFX video-card) isn't needed in order for Sabre to
run acceptably fast. This is an Svgalib console-graphics program so
not even X is needed.
Sabre can be run in a variety of resolutions and window-sizes.
Naturally a fast CPU will enable a larger and more detailed screen
with minimal choppiness.
Frankly, I probably never would have written this review if Antti
Barck's tremendously useful dialog-based script RunSabre hadn't been
included in the distribution. Flight-simulator veterans probably will
be able to learn to use Sabre without this script, but novices (like
me) will find this interface to Sabre invaluable. It provides a
convenient way to set the screen resolution, run various demo missions
and flight scenarios, and access the documentation (especially the
key-binding doc) from one menu-based screen. All of these tasks can be
accomplished with command-line switches, but who wants to learn these
while still deciding whether it's worth devoting time to learning a
new application? Without this script, running Sabre can be a
frustrating sequence of short flights followed by re-reading the docs
after watching your jet crash yet another time.
Sabre offers quite an extensive array of view-points from which to
observe your fighter-plane and the surrounding action. Naturally you
can be in the cockpit and see forward, to the side, and behind, but
you can also become a disembodied viewer off to one side. Even more
interesting, a click of a key will put you in the cockpit of one of
the enemy planes.
Your plane can be controlled with either a mouse, the keyboard, or a
joystick (assuming joystick support is compiled into your kernel). I
found controlling with a mouse difficult, whereas after some practice
the keyboard seemed to provide more accurate control. I don't have a
joystick so I was unable to try that method; I understand that
flight-sim enthusiasts prefer them.
The first scenario in the RunSabre menu is called Just Fly. I was
grateful for this choice; the last thing I needed while trying to
figure out the controls was harassment by MIG fighters intent on my
destruction! Several other flight scenarios are supplied, some
involving aerial combat and others ground attack missions. These
scenarios are interactive; the demo missions are more like short
movies which display the variety of scenes Sabre is capable of
displaying.
All in all Sabre is a quality piece of software. It compiled easily
and I found no obvious bugs. The source or pre-compiled binaries can
be obtained from the Sabre web-site linked at the beginning of this
article.
_________________________________________________________________
Last modified: Sun 28 Jun 1998
_________________________________________________________________
Copyright © 1998, Larry Ayers
Published in Issue 30 of Linux Gazette, July 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
SFM: A New GTK-Based Application
By Larry Ayers
_________________________________________________________________
Introduction
As the GTK GUI programming toolkit matures, more developers have been
inspired to use it for the visual presentation of their programs.
Pascal Rigaux, a French programmer, has come up with a small
file-manager he calls sfm. Sfm isn't quite as simple as the name and
initial appearance imply; it has a remarkably full feature-set for
such a small program.
There has been a long succession of X file-managers which use
various icons to represent different types of files. This approach can
be useful for people accustomed to a Macintosh or Windows environment,
where this type of file-manager is common. These icons do have
drawbacks, though, as fewer files will fit into a single display
window which results in much more scrolling to find a particular file.
The impact on system resources is considerable as well, as the X
server is called upon to constantly update the display, and memory
usage is much greater than what is needed by a text-based manager. In
the end it's just a matter of preference.
Sfm is unusual in that it is an X-only file-manager which is also
text-based (FileRunner is another). It also goes against the general
trend towards mouse-based applications in that the keyboard interface
is well-developed.
Appearance and Features
The default window size is rather small; my first impression was that
this was a trivial application, probably a first GTK programming
exercise without much utility. As I explored further (and actually
read the README file!) I found that sfm's uncluttered appearance
conceals an interesting and useful approach to the perpetual effort to
contrive a useful interface to the ls utility. In the screenshot below
I've enlarged the default window by about one-third:
sfm window
The above window is rather plain. The interesting part is the
right-mouse-button menu which offers a plethora of actions which can
be performed upon the highlighted file, along with a submenu offering
less-used possibilities. I wanted a screenshot showing the basic sfm
window with both menus fanned out from it. I don't know whether it is
an idiosyncrasy of sfm, GTK, or xv (which I used for the screenshots),
but while I was able to get shots of either menu by itself, I couldn't
get them both in one screenshot. So here are the main menu and its
submenu; try to imagine them connected to the first screenshot above:
first sfm menu
This is the submenu stemming from the "more" item:
sfm sub-menu
As you can see, the keyboard shortcuts for all of the various
menu-items are shown to the right of the action menu-entries. This is
a great help in learning the key-bindings, which are designed to be
intuitive and similar to those of many other programs. I especially
like the Lynx-style left-and-right arrow-key directory navigation (the
mc file-manager offers this as an option).
Multiple sfm windows can be opened at once and files can be easily
copied or moved between them.
Sfm uses a configuration dot-file (~/.sfm) in order to determine the
action to take upon a highlighted file when either the enter key, the
right-arrow-key, or a single left-mouse-button click is received.
Surprisingly, this is one dot-file you won't have to edit, as it is
auto-generated. The first time you select, as an example, a text file,
a dialog box pops up asking what action you'd like to take, such as
editing it with your favorite editor. That preference is then recorded
in the ~/.sfm file; the next time a text file is selected it will be
loaded into your editor. Sfm uses the standard Linux file utility to
determine file-types. This is quite a nice feature, especially for new
Linux users who have enough to do just becoming comfortable with the
system without constantly needing to chase down and edit config files.
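To see the kind of type string sfm is keying its action choices off, you can
run file by hand (the filename is just an example):
$ file notes.txt
notes.txt: ASCII text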
Sfm is still in its early days, but judging by the intelligent design
of the current version, it's likely that further improvements are in
the offing. The current version (1.4 as I write this) is available
from the Sunsite archive; an alternate site is here.
_________________________________________________________________
Last modified: Sun 28 Jun 1998
_________________________________________________________________
Copyright © 1998, Larry Ayers
Published in Issue 30 of Linux Gazette, July 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
Portable GUI C++ Libraries
By Sean C. Starkey
_________________________________________________________________
Most well-written, non-graphics C++ code is portable, but major
problems occur when one tries to write portable applications for
graphical user interfaces. On Linux, the X Window System is used as
the major graphical user interface. GUI code written on Linux will not
work on MS Windows. Even though we all know that Linux is the better
of the two, some Linux developers would like to also support MS
Windows with its large number of users.
With a portable GUI C++ library, source code developed under Linux and
X can be compiled for other platforms, including MS Windows. Quite a
few GUI C++ libraries are available at this time, including MFC and
OWL for MS Windows. Unfortunately, none of these libraries are
portable to both X and MS Windows.
Desirable features in a portable GUI library include the following:
* Support for a large number of platforms
* Easy to use, with good documentation
* A good set of widgets (or controls, in Microsoft-speak) to use in
your interface
Other desirable features which are not GUI related are portable
network functionality, file I/O capabilities and some good container
classes.
What are some of the problems faced by a portable GUI library? One is
that GUI code on different platforms varies widely. To create a new
window, MS Windows uses a completely different command than X, even
though the code uses the same programming language in both. Another
problem is subtle changes in event handling. All GUI applications are
event driven, but the events are different on different platforms. The
portable GUI library must take all of these differences into
consideration and supply a common interface for all platforms.
I have reviewed three different GUI C++ libraries which support both X
and MS Windows. All are free of charge with no royalties. The source
code for these libraries is available on the web sites cited in
Resources.
Since these libraries are written in C++, you have all the advantages
of object-oriented design. To create a new window, you derive your own
window class from the main window class. After adding the appropriate
code to handle events in your window, it is finished.
wxWindows
wxWindows is by far the most active of the libraries available.
wxWindows was originally developed by Julian Smart, but has received
contributions from many others. The version of wxWindows reviewed in
this article is 1.68B. Version 2.0, a major rewrite of the library, is
rumored to be available in ``the near future''.
wxWindows is a very modular project. The main version is available on
the web site. In addition to the main version, there are also many
subprojects. Some of these subprojects include additional widgets, an
Xlib library port, a Macintosh port and many others. These subprojects
are described on the wxWindows web site (see Resources).
Note that the main version of wxWindows requires the Motif toolkit.
Motif is not free; therefore, most Linux installations do not include
it. Lesstif, a popular Motif clone, compiles and works with wxWindows.
There is also a side project which uses only standard Xlib libraries
so that wxWindows does not need Motif.
wxWindows has many features. Figure 1 is a screen capture of the
sample program distributed with wxWindows running on a Linux system.
Notice all of the widgets available to the programmer. More screen
shots are available on the wxWindows web site.
[INLINE]
Figure 1
One of the best features of wxWindows is the on-line documentation. The
documentation comes in HTML, LaTeX and MS Windows help format. There
is also a very active mailing list for wxWindows, where many questions
can be answered. Trying to learn all of these new classes can be
confusing, and wxWindows does a good job of describing them.
If you don't want to download all of the wxWindows source code, a
distribution on CD is available. See the web site for more details.
V
V is another freely available library, and was developed by Dr. Bruce
E. Wampler. [See ``V--A Free C++ GUI Framework for X'', Linux Journal,
December 1996.] It is able to compile on X and MS Windows. It is a
complete library, but does not have all of the fancy controls that
wxWindows has. Although not as fancy, in my opinion, V's source code
is better written and easier to understand than that of wxWindows.
V does not require the Motif libraries to build and run. All V source
code uses pure Xlib library calls, so it should be able to compile on
any Linux system with no difficulty.
V has quite a few widgets available as well. Figure 2 is a screen shot
of an example program distributed with the library. This look is
consistent on all platforms.
[INLINE]
Figure 2
YACL
Another library worth looking at is YACL (Yet Another Class Library).
The author, M. A. Sridhar, reports that YACL can compile on X, MS
Windows and OS/2. Unfortunately, it looks as if progress on YACL has
been nonexistent since late 1996. The current version of YACL, 1.60,
is close to complete with a good set of classes and widgets.
[INLINE]
Figure 3
Figure 3 shows a screen shot of an example program distributed with
the library. This example shows some of the graphics primitives
available with YACL. YACL also has all of your basic widgets, such as
buttons, menus, choices and radio buttons.
One of YACL's biggest drawbacks is a lack of documentation. There is a
book about YACL: Building Portable C++ Applications with YACL,
Addison-Wesley, 1996. I would suspect this book has more information
than the documentation distributed with the library.
If you would like more information on any of these libraries, please
see their web pages listed in the Resources section below.
Resources
wxWindows: http://web.online.co.uk/julian.smart/wxwin/
V: http://www.objectcentral.com/vgui/vgui.htm
YACL: http://www.cs.sc.edu/~sridhar/yacl/
User Interface Software Tools:
http://www.cs.cmu.edu/afs/cs/user/bam/www/toolnames.html
A good page listing other GUI libraries that are not necessarily free,
written in C++ or supportive of the X Window System.
_________________________________________________________________
Copyright © 1998, Sean C. Starkey
Published in Issue 30 of Linux Gazette, July 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
Using Linux instead of an X Emulator
By Al Koscielny
_________________________________________________________________
Sometimes you must have X on the desktop--at work, that is. At home,
you would have several choices from your well-appointed stable of
Linux ponies. Work is another story--you have the corporately
sanctioned productivity tool running on your desktop, and adding an X
emulator application is going to cost somebody some money. You blew
your software budget for the year on compilers. So what are you going
to do now?
Today's typical work environment includes a desktop PC running a
version of the Windows operating system. If the job is writing memos
and sending around Word documents, that's a reasonably adequate
solution. If the job is developing and testing cross-platform GUI
applications, then running an X application from your desktop is an
effective way to get the work done.
The Trafvu Project
Trafvu is an application for displaying results from traffic
simulation models. Such models can be used for planning and design
purposes. For example, suppose the city fathers attract the Olympics
for the year 2004. What changes need to be made to the traffic system
so that the city does not suffer from massive gridlock during the
Olympics?
Trafvu was developed in C++ to run on Windows and the X Window System,
with XVT being used as a cross-platform tool. For a software
development lab, we had several Pentiums running Windows 95 and NT and
some Sun workstations running Solaris 2.5. Generally, everyone on the
project had a desktop system, and the lab was used for collaborative
efforts, such as fixing bugs, and for design meetings. Several of the
lab machines were set up as file servers. The application source code
was maintained on a Windows NT server using the Mainsoft Sourcesafe
software revision control product. Typically, the project source code
was extracted to a local machine (lab or desktop), some source files
checked out, and C++ code developed or modified and tested; then the
modified files were checked back in.
Initially, when porting code from Windows to X, files were transferred
to a UNIX workstation using FTP, followed by a repeat of the compile,
link, test and debug cycle. As the project progressed, SAMBA was
installed on the Sun workstation, so that developers could access
their home directories on the Sun workstation from the Windows
browser. Then files could be extracted directly to the file system on
the workstation. Opening a TELNET session from the Windows PC to the
Sun workstation permitted concurrent compilation and linking on both
Windows and X.
At this point in the cycle, the source code had been modified,
compiled, linked and tested under Windows. Now we needed to test it on
the X Window System. We needed a way to run an X application from our
desktop PC.
X Emulator Options
There are several ways to provide X on the desktop. A few years ago,
X-terminals were very popular. An X-terminal typically has nice real
estate (i.e., a large screen), some memory, no local disk space and
costs about $1000 to $2000 (most of the expense is that nice monitor).
At boot time, it loads an OS from a boot server, so setting up the
boot server becomes the headache. As PC processors became cheaper and
more capable, the price of hard drives fell through the floor, so the
PC desktop became very popular. Typically, PCs run a version of the
Windows operating system. Using them as X terminals requires
additional software for X emulation. For example, with Hummingbird's
Exceed, a typical X emulator, you can run your favorite X application
on a convenient UNIX workstation and have X display on your desktop
computer. X emulation products for Windows generally cost a few
hundred dollars per machine. Currently, network computers (NCs) are
being pushed as a solution to the software application configuration
nightmare brought on by the proliferation of desktop PCs, and typical
prices seem to be about $700US per unit.
Several options are available and if a few hundred dollars is not a
concern, the X emulator application is probably the ticket. If, on the
other hand, no one will sign the purchase request or many machines
need the capability, then a cheaper option is needed.
Linux to the Rescue
With hardware prices falling and Windows applications becoming more
bloated, there's usually some older hardware sitting around unused.
Who wants to attempt to run Visual C++ 5.001a under Windows 95 on that
486/66 with a 1 GB hard drive? I won't volunteer. Next question--what
OS runs X quite comfortably on a 486 with 16MB of memory? The answer
is Linux.
Thus, an alternative to the X emulation application is running Linux
on the PC. Set up the PC as a dual boot machine and simply boot Linux
to run X applications on the desktop. The advantage of using Linux is
that no purchase requests have to be signed. Just bring the CD-ROM
from home, find some free time and disk space and install it. The
disadvantages are finding the time to do the installation and the need
to boot between running Windows or X. The hurdles in the process are
finding about 300MB of spare disk space and a three-button mouse.
Setting up Linux
Initially, my company installed Linux 1.2.13 on a Gateway 486/66 and a
no-name clone 486/66. There are numerous resources on installing
Linux, if you need that information. However, having copies of books
such as Running Linux and Linux Network Administrator's Guide was
essential for us. Internet access for HOWTO documents can also be
helpful. Installing on the Gateway and the no-name was mostly
straightforward. The Gateway did not have a CD-ROM drive, so the
CD-ROM was exported from one of the Sun workstations. The Slackware
distribution has the option of installing over a network and this
worked well. Both the Gateway and the no-name had SCSI cards, and
additional SCSI disks were salvaged from other machines for installing
Linux. Setting up XFree86 on the no-name was a chore because of the
video card. Generally, the cheaper a PC, the less documentation is
provided with it. So putting up X on the no-name took quite a bit of
experimentation. The Linux multiple-console capability is very handy
when installing XFree86: ctrl-alt-F1 switches back to the console from
which X was started (with the startx command) so you can look for
error messages, alt-F7 returns you to your X session, and ctrl-alt-F?,
where ?=2, 3, 4, 5 or 6, gives another login session for checking log
files, etc.
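For reference, the network install amounted to exporting the CD-ROM
from one of the Suns and pointing the Slackware setup program at an
NFS source. The sketch below shows the general idea only; the host
name sunhost, the mount points and the Solaris-style share command are
assumptions rather than the exact commands we used.
# On the Sun (Solaris-style syntax assumed), export the mounted CD-ROM
# read-only over NFS:
share -F nfs -o ro /cdrom/slackware
# On the PC, mount it by hand (Slackware's setup program can also do
# this from its NFS install option):
mount -t nfs -o ro sunhost:/cdrom/slackware /mnt/cdrom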
Later in the project we installed Linux 2.0.0 on a SAG Electronics
dual-processor Pentium Pro 200 MHz with a Number Nine Imagine 128
Series II video card with 4MB. The Pentium Pro came with a 4GB hard
drive, and a 400MB partition was allocated for Linux. Installation on
the Pentium Pro machine was more difficult because the hardware was so
new. Video drivers for the Number Nine card were in beta, and the
generic SVGA drivers wouldn't work. Upgrades to XFree86 took about
10-20MB of downloading from http://www.xfree86.org/ and perhaps a
couple of hours to install and test. The three-button mouse on the
Pentium Pro insisted on being difficult, but this was remedied by the
advice in the three-button mouse mini-HOWTO.
I tried the two-button mouse emulation and found it to be just good
enough to get me in trouble. I would think I had the timings down,
roll into an xterm with the root prompt and paste the equivalent of
War and Peace in at the command line. (Gee, I hope I didn't do
something like rm -rf / in that session.) I did find coworkers who were
willing to trade a Logitech three-button mouse for my two-button
mouse. Once one is used to the X version of ``cut and paste'', it is
very difficult to do without it.
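For anyone fighting the same mouse battle, the relevant piece of
XFree86 3.x configuration is the Pointer section of XF86Config. The
fragment below is only a sketch; the protocol name and device path are
assumptions and will differ from system to system.
Section "Pointer"
    Protocol    "MouseMan"    # Logitech three-button serial mouse (an assumption)
    Device      "/dev/ttyS0"  # serial port the mouse is attached to
    # With a genuine two-button mouse, uncomment the next line so that
    # pressing both buttons together acts as the middle button:
    # Emulate3Buttons
EndSection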
The three PCs were set up with dual boot capability. Initially, we
just used a floppy boot disk for Linux, since making one is easily
accomplished, and the MBR (Master Boot Record) for Windows remains
intact. Later, when Linux had proven itself useful and we were
interested in convenience, we added LILO to the MBR. The PCs were
frequently used in Windows to edit documents, prepare spreadsheets,
etc. It was very handy to access these files without having to boot
Windows. These files remain reachable while the PC is running Linux.
For FAT file systems, set up a mount point in /etc/fstab with a file
system type of msdos to make the Windows partition fully accessible
under Linux. Then install Samba on the Linux machines and export the
mounted Windows file system through the smb.conf configuration file,
so that other Windows machines can browse the files as well (through
File Manager or Explorer, depending on your Windows flavor).
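As a sketch of what that involves, an /etc/fstab line and a minimal
smb.conf share might look like the following; the device name, mount
point and share options are assumptions and should be adjusted to the
machine at hand.
# /etc/fstab: mount the Windows C: partition whenever Linux boots
/dev/hda1   /dosc   msdos   defaults   0 0
# smb.conf: share the same directory so that other Windows machines
# can browse it while this PC is running Linux
[dosc]
   comment = Windows partition on the dual-boot PC
   path = /dosc
   read only = no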
It's encouraging to see file system drivers for FAT and HPFS, since
accessing the files from the other operating systems is very
convenient while running Linux. However, with current hard drive
sizes, FAT is outdated and offers very little security. Microsoft
offers some alternative file systems, such as VFAT and NTFS. However,
it appears that the specifications for these file systems will remain
exclusively with Microsoft. So, although work is in progress on the
NTFS driver
for Linux, I don't think NTFS support under Linux will be available
any time soon. Perhaps a better design choice is to minimize the usage
of proprietary file systems on multi-boot machines.
Typically, the Linux PCs were used for an X-terminal login to the Sun
workstations. To make this convenient, the ``GoodStuff'' button bar
was used. The environment variable DISPLAYHOST was set in this way:
export DISPLAYHOST=vader:0
This environment variable is used with rsh to bring up an xterm on the
Sun workstation. The .fvwmrc file shipped with the FVWM window manager
has several sample button definitions, so just fill in appropriate
values for the remote host and $DISPLAYHOST. Getting the GoodStuff
button to work can be
a chore if something is wrong with the setup. Start by testing with a
simple command:
rsh remote-host date
Once this works, rsh remote-host xterm should also work. Having a
single button both set the DISPLAY variable and start the remote
session avoids the nuisance of windows appearing on the remote console
when DISPLAY is left at its default value of :0.0.
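A GoodStuff button along those lines might look like the one-line
fragment below, adapted from the samples in .fvwmrc. The Sun's host
name sunhost, the icon file and the rsh options are assumptions; it
also presumes that DISPLAYHOST is exported before X starts and that
the usual rsh (.rhosts) and X access permissions are in place.
# ~/.fvwmrc fragment for the FVWM 1.x GoodStuff module: one click
# starts an xterm on the Sun and displays it back on this PC
*GoodStuff SunXterm rterm.xpm Exec "xterm" rsh -n sunhost xterm -display $DISPLAYHOST &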
A side benefit of installing Linux is backing up the file system over
the network. A PC usually doesn't have a tape drive, whereas a more
backup-conscious Sun workstation may have a 5GB DAT drive. From the
Linux PC, the dd command with the appropriate arguments will back up
your hard drives to a tape drive on a remote workstation. A crontab
entry scheduling this type of backup during nonwork hours keeps the
impact on network bandwidth to a minimum.
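The sketch below shows the general idea, with dd on the Linux side
piped through rsh to dd on the Sun. The device names, host name and
schedule are assumptions; in particular, the tape device name on the
Sun will vary (a SunOS-style non-rewinding DAT device is assumed
here).
# Dump the whole first IDE disk to the DAT drive on the Sun:
dd if=/dev/hda bs=64k | rsh sunhost dd of=/dev/nrst0 obs=64k
# Sample crontab entry running the same backup at 2 a.m. every Sunday,
# when the network is quiet:
0 2 * * 0 dd if=/dev/hda bs=64k | rsh sunhost dd of=/dev/nrst0 obs=64k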
Synopsis
There is a steep learning curve to installing Linux, and my initial
installation of Linux took several days. Recently, I installed
Slackware 3.2 (2.0.29 kernel) in about two hours, which included
bringing up X and restoring home directories. Recent efforts at
improving Linux's ease of use have been well spent and make Linux a
more viable alternative for use at work.
A spare X terminal is very handy to have around when debugging an
application. It is possible to stop events from getting to the
debugger on the Sun workstation, so that the console is essentially
locked. However, if a free X display is available, set the DISPLAY
variable to point at it before running a command-line debugger.
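In practice this is just a matter of resetting DISPLAY before starting
the debugger; the host name lincoln and the use of dbx below are
assumptions for illustration.
# On the Sun, send the application's windows to the spare Linux
# display instead of the console, then start the debugger (the Linux X
# server must allow connections from the Sun, e.g. via xhost):
export DISPLAY=lincoln:0.0
dbx trafvu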
Booting multiple operating systems is an interesting twist on
cross-platform application development. If I could have built the
trafvu application using the GNU compiler (some issues with the Rogue
Wave libraries precluded this), I could have used a single PC for both
Windows and X development and testing.
We have used Linux and XFree86 on a daily basis for over a year and
have been impressed with the solid performance.
_________________________________________________________________
Copyright © 1998, Al Koscielny
Published in Issue 30 of Linux Gazette, July 1998
_________________________________________________________________
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
USENIX 1998
By Aaron Mauck
_________________________________________________________________
Photo Album
_________________________________________________________________
Each year, the USENIX organization (http://www.usenix.org/) puts on a
technical conference dealing with UNIX and other UNIX-like systems.
This year they had an emphasis on free or Open Source operating
systems, primarily Linux and *BSD. The conference was held in New
Orleans, Louisiana, from June 15th to the 20th.
Tutorials
Many day-long tutorials were offered on Monday and Tuesday, including
``Inside the Linux Kernel'' by Stephen Tweedie, one of the ext2
developers, and several talks on Networking and Security. I attended
``Hot Topics in System Administration'', taught by Trent Hein and Evi
Nemeth. They covered many topics including Samba, Packet Filtering and
IPv6.
Vendor Expo
I found it refreshing to see a vendor exposition (albeit a small one)
made up entirely of UNIX-friendly companies. O'Reilly was there,
displaying all of their titles for sale at 20% off. Needless to say,
this made it one of the most popular booths. Most of the faces were
familiar: Red Hat, Linux International, InfoMagic, the three heads of
BSD and others. Among the unexpected participants was the FBI, just a
short distance from the Free Software Foundation. The whole atmosphere
of the exposition was quite relaxed, without the hectic feel of Comdex
and other large industry trade shows.
BOFs and Speeches
Each evening offered several talks by different people on a wide range
of subjects. I caught ``The State of Linux'' talk by Linus Torvalds on
Thursday afternoon. He set Aug/Sep 98 as a hopeful release date for
the 2.2 kernel. Another event that took place every evening was the
``Birds of a Feather'' (BOF) meetings, which were designed as a place
for people with common interests to come together and discuss their
ideas and goals. It was also a great place to rub shoulders with some
of the ``big names'' in the UNIX community, such as Keith Bostic,
Eric Allman and Jon ``maddog'' Hall.
Terminal Room
What UNIX conference would be complete without a terminal room?
Luckily, EarthLink and OpenBSD donated machines and bandwidth and
created a room with thirty or so machines running OpenBSD, connected
to a T1.
Summary
If I were to do it all over again (and I most definitely want to), I
would spend more time planning what I want to learn. I was a bit
overwhelmed by the sheer number of talks/events, and therefore found
it difficult to focus on exactly what I wanted to get from the
experience--I was constantly spreading myself too thin. For any lover
of UNIX, Linux, BSD and the like, USENIX is a must at least once in a
lifetime.
It is a very friendly and co-operative environment and has definitely
earned its reputation as one of the hubs of the computing community.
_________________________________________________________________
Copyright © 1998, Aaron Mauck
Published in Issue 30 of Linux Gazette, July 1998
_________________________________________________________________
_________________________________________________________________
Linux Gazette Back Page
Copyright © 1998 Specialized Systems Consultants, Inc.
For information regarding copying and distribution of this material see the
Copying License.
_________________________________________________________________
Contents:
* About This Month's Authors
* Not Linux
_________________________________________________________________
About This Month's Authors
_________________________________________________________________
Larry Ayers
Larry lives on a small farm in northern Missouri, where he is
currently engaged in building a timber-frame house for his family. He
operates a portable band-saw mill, does general woodworking, plays the
fiddle and searches for rare prairie plants, as well as growing
shiitake mushrooms. He is also struggling with configuring a Usenet
news server for his local ISP.
Jim Dennis
Jim is the proprietor of Starshine Technical Services. His
professional experience includes work in the technical support,
quality assurance, and information services (MIS) departments of
software companies like Quarterdeck, Symantec/ Peter Norton Group, and
McAfee Associates -- as well as positions (field service rep) with
smaller VARs. He's been using Linux since version 0.99p10 and is an
active participant on an ever-changing list of mailing lists and
newsgroups. He's just started collaborating on the second edition of a
book on Unix systems administration. Jim is an avid science fiction
fan -- and was married at the World Science Fiction Convention in
Anaheim.
Norman M. Jacobowitz
Norman is a freelance writer and marketing consultant based in
Seattle, Washington. Please send your comments, criticisms,
suggestions and job offers to normj@aa.net.
Al Koscielny
Al is a Systems Engineer with Resource Solutions International. In his
spare time, he plays with Linux, reads Usenet, rides an ATB
(all-terrain bike) and enjoys cooking. He wishes to acknowledge the
contributions of Nacho, a big yellow tabby, to this article. He can be
reached at koscieln@interpath.com.
Mike List
Mike is a father of four teenagers, musician, and recently reformed
technophobe, who has been into computers since April 1996, and Linux
since July, 1997.
Aaron Mauck
Aaron is the System Administrator at SSC.
Gerd Mueller
Gerd's first computer was an Amiga 500, but he has been working with
Linux since 1996. A few weeks ago he finished his studies in computer
science. Currently he spends most of his time developing WipeOut at
softwarebuero m&b. He can be reached at gerd@softwarebuero.de.
David Nelson
David manages scientific research at the U.S. Department of Energy.
Before that he earned his living as a theoretical plasma physicist. He
started programming on the IBM 650 using absolute machine language and
later graduated to CDC, DEC and Cray machines for his research. But
Linux is the most fun. He and his wife, Kathy, live near Washington
DC; they enjoy tennis, skiing, sailing, music, theater and good food.
David Penland
David has been using Linux since he first encountered the SLS
distribution in the autumn of 1992. He works as an AIX systems
administrator for Unifi, Inc. in Greensboro, North Carolina. He is
married to Angel Penland, and they share a house with 2 dogs and 4
cats. He can be reached at dpenland@mindspring.com.
Eric S. Raymond
Eric is a semi-regular contributor to Linux Journal. You can find more
of his writings, including his paper ``The Cathedral and the Bazaar'',
at http://www.ccil.org/~esr/.
Jan Rooijackers
Jan is an employee at Ericsson Data Netherlands BV (DSN). He came in
contact with UNIX in 1991 and is now working in the Internet/Intranet
business. Outside work, Jan spends time with his family and computers.
He can be reached at Jan.Rooijackers@dsn.ericsson.se.
Sean C. Starkey
Sean has been a Linux user for over four years. His first Linux system
had 0 as the major version number and came on floppy disks. If you
would like to know more about Sean, feel free to visit his web site at
http://rmi.net/~starkey/. He can be reached at starkey@rmi.net.
Alex Vrenios
Alex is a Lead Software Engineer at Motorola and has his own
consulting business. He is always taking some sort of class. He just
finished the class work toward a Ph.D. in computer science, but only
time will tell if it goes any further. His wife, Diane, is certainly
his best friend and biggest fan. He enjoys his two Schnauzers, Brutus
and Cleo, and his dozens of African cichlids, too. He is a licensed
amateur radio operator, as is Diane, and they spend more than a few
nights together observing the skies through their 5-inch telescope.
They like to get out and stay active, to enjoy life together.
Robert Wuest
Robert is an Electrical Engineer with Kemet Electronics in equipment
engineering. He lives in the US but works in Mexico. He plays with
computers there, developing software using Linux (for embedded 6809
systems). This article results from his current project, an instrument
which will use a PC104 computer running Linux. He is building the
chassis in AutoCAD using full 3D to place PC boards, relays and a lot
of connectors. He really wishes he could do that in Linux. He uses
Tango for DOS for circuit design and PCB layout. He wishes he could do
that in Linux also.
_________________________________________________________________
Not Linux
_________________________________________________________________
Thanks to all our authors, not just the ones above, but also those who
wrote giving us their tips and tricks and making suggestions. Thanks
also to our new mirror sites.
This last month I've just been working, working, working -- no time
for fun. Riley is off on our annual motorcycle trip without me; he's
exploring Utah and Arizona, all our favorite parks. So at least one of
us is having fun. :-)
Actually, I'm having fun too. Working on LG always seems more like fun
than work and the same is true for Linux Journal. I've also been doing
some exploring of areas surrounding Seattle with my father-in-law, who
just moved up to this area. We had a two-hour ferry wait last Saturday
that was frustrating yet comfortable because of the company. I think
having nice in-laws is a definite plus in life. At any rate, we've
seen some beautiful scenery, including a trip to Snoqualmie Falls and
one to the Olympic Peninsula.
Have fun!
_________________________________________________________________
Marjorie L. Richardson
Editor, Linux Gazette, gazette@ssc.com
_________________________________________________________________
_________________________________________________________________
Linux Gazette Issue 30, July 1998, http://www.linuxgazette.com
This page written and maintained by the Editor of Linux Gazette,
gazette@ssc.com