Our sponsors make financial contributions toward the costs of publishing Linux Gazette. If you would like to become a sponsor of LG, e-mail us at sponsor@ssc.com.
Linux Gazette is a non-commercial, freely available publication and will remain that way. Show your support by using the products of our sponsors and publisher.
TWDT 1 (text) and TWDT 2 (HTML)
are files containing the entire issue: one in text format, one in HTML.
They are provided
strictly as a way to save the contents as one file for later printing in
the format of your choice;
there is no guarantee of working links in the HTML version.
Got any great ideas for improvements? Send your comments, criticisms, suggestions and ideas.
This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com
The Mailbag! Write the Gazette at gazette@ssc.com
Contents:
Date: Sun, 04 Oct 1998 16:04:47 -0500
From: "Casey Bralla", Vorlon@pdn.net
Subject: Single IP Address & Many Servers. Possible?
This is for the "article wanted" section of the Linux Gazette. Thanks!
I have a single IP address for accessing the Internet. I have an intranet with several old 486-class computers, which all access the Internet via IP masquerading. The single machine which is actually connected to the Internet (and does the masquerading) is not powerful enough to run a news server, mail server, HTTP server, etc. I would like to split these functions up among the low-cost computers I have lying around. How can I force HTTP requests to be serviced by the HTTP server even though it is not directly connected to the Internet with an IP address?
Example Diagram below:
207.123.456.789 (single IP address to the Internet)
        |
486 DX/2-66 (IP masquerading)
        |
486 DX-33   mail server            192.168.1.1
        |
K-5 133     HTTP server            192.168.1.2
        |
486 DX-33   Leafnode news server   192.168.1.3
        |
(other local machines)

I want anyone on the Internet who accesses my web server at 207.123.456.789 to be directed to the computer at 192.168.1.2 on the intranet. (Obviously, the intranet users have no problem accessing the correct machines, since they just reference the local 192.xxx.xxx.xxx IP addresses. But how can I make the same functionality available to the rest of the known universe?)
Casey Bralla
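One common approach of the day, sketched here for context only (it assumes a late-2.1/2.2-series kernel with IP masquerading enabled and the ipmasqadm tool installed; on 2.0 kernels the equivalent is the ipportfw patch, and the address below is just the letter's example):

# Hypothetical sketch: redirect inbound connections hitting the
# external address to the internal servers named in the diagram.
EXT=207.123.456.789

ipmasqadm portfw -f                                         # flush old rules
ipmasqadm portfw -a -P tcp -L $EXT 80  -R 192.168.1.2 80    # HTTP
ipmasqadm portfw -a -P tcp -L $EXT 25  -R 192.168.1.1 25    # SMTP (mail)
ipmasqadm portfw -a -P tcp -L $EXT 119 -R 192.168.1.3 119   # NNTP (news)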
Date: Wed, 7 Oct 1998 15:40:06 -0500
From: "John Watts", watts@top.net
Subject: Missing network card
I've installed (from diskette) Debian 2.0 (hamm) on a system at work. The idea was to set it up as a file/print server for my department. Unfortunately, Linux doesn't believe me when I tell it that there is a network card. It's the EtherExpress 16. I've tried reinstalling and autoprobing, no luck. I've tried different Linux distributions, no luck. HELP!
John Watts
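A sketch of the usual workaround (not from the letter): skip autoprobing and hand the driver its settings explicitly. The io/irq values below are assumptions; use whatever the card's EEPROM is actually set to (the DOS SOFTSET utility will tell you).

# Load the EtherExpress 16 driver as a module with explicit settings
# (io/irq values are assumptions, adjust to match the card):
modprobe eexpress io=0x300 irq=10

# Or, for a compiled-in driver, hint it at boot via /etc/lilo.conf
# (the boot parameter format is ether=irq,io,name), then re-run lilo:
#   append="ether=10,0x300,eth0"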
Date: Tue, 06 Oct 1998 21:36:12 PDT
From: "Jonathan Bryant",
jonathanbryant@hotmail.com
Subject: Linux Extra?
I've been trying to encourage my Dad to try Linux. He has shown interest, but was curious whether there is a Linux counterpart to Extra! on Windoze. He does a lot of work on the mainframe and needs something that can provide a "3270 terminal interface" for a "TSO session". I wondered if there are any old-school programmers out there who can recommend a piece of software which would suit his needs.
Thanks
Jonathan Bryant
Date: Fri, 09 Oct 1998 08:45:50 -0400
From: "Brian M. Trapp",
bmtrapp@acsu.buffalo.edu
Subject: NumLock - On at startup?
Hi! I've been reading the Linux Gazette for almost a year now. NICE WORK!!! You're a great resource.
Here's my quick and probably easy question: on reboot (yes, I do that occasionally, just to use Win95 and Quicken), Linux (Red Hat 5.1) defaults to starting up with Num Lock off. How can I get it to switch Num Lock on for me automatically? (This is a matter of pride - I made the mistake of telling my girlfriend how great and powerful the OS is, and then she had to discover the Num Lock quirk for me...)
Thanks!
Brian Trapp
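One widely used fix, sketched here with the caveat that the console paths are assumptions: run setleds (from the kbd package) over each virtual console from a boot script such as /etc/rc.d/rc.local.

# Turn Num Lock on, and make it the default, for the first six
# virtual consoles (add this to /etc/rc.d/rc.local on Red Hat):
for tty in /dev/tty1 /dev/tty2 /dev/tty3 /dev/tty4 /dev/tty5 /dev/tty6
do
    setleds -D +num < $tty
done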
Date: Fri, 9 Oct 1998 09:47:05 +0800
From: "ctc",
zhanghongyi@163.net
Subject: Where to find S3 ViRGE GX2 card driver for Linux
I use an S3 ViRGE GX2 video card in my computer, but I cannot run startx. Do you know where I can find drivers for this kind of card? Any information is greatly appreciated. Thanks.
Zhang-Hongyi
Date: Sun, 11 Oct 1998 16:38:00 -0700
From: Ed Ewing,
edewing@isomedia.com
Subject: article idea
An article regarding cable modems and security, multiple interfaces, etc., would be welcome.
Thanks
Ed
Date: Sun, 11 Oct 1998 10:47:09 +0200
From: "P.Plantinga",
plant@cybercomm.nl
Subject: drivers savage Linux
Are there X Window System drivers for my Savage card under Red Hat 5.1? If there are any, please let me know where to get them.
Thanks
P.Plantinga
Date: Sat, 10 Oct 1998 04:23:56 -0400
From: Eduardo Herrmann Freitas,
efreitas@winnie.fit.edu
Subject: Ensoniq Audio PCI Sound Card
I would like to know if it is possible to install an Ensoniq Audio PCI Sound Card on Linux...
----
Eduardo
Date: Mon, 12 Oct 1998 14:01:07 -0400
From: "Mann, Jennifer",
Jennifer.Mann@GSC.GTE.Com
Subject: looking for information
Hi. I am looking for information about how Linux handles transactions and database support. Has the Linux Gazette published any articles pertaining to this topic? If so, I would like to know if and where I can find those articles on the web.
Thank you,
Jennifer Mann
Date: Thu, 15 Oct 1998 09:01:46 -0500
From: "Mark Shipp(Soefker)",
mshipp@netten.net
Subject: Confused with ProComm scripting
I got to your web site through a search on Yahoo. I must say that your help is a very valuable resource.
The reason I'm doing this search is that I'm looking for someone with experience in Aspect scripting. Could you or someone you know steer me in the right direction?
What I'm trying to do is create a counter that transmits its value in order to open sessions to different nodes on a network. Below is the part of the program that is giving me the problem. It works, except that I have to use the "TERMMSG" command instead of a "TRANSMIT". That won't do, because the "open 0,(value)" statement has to be transmitted across the LAN.
Thanks for your help and time,
Mark
proc main
   integer unit
   while unit != 3        ; This means "while unit does *not* equal 3".
      unit++              ; Increment the value of the counter (add 1 to it).
      termmsg "open 0,%d" unit
      transmit "^M"       ; This is where I would add in my other programming.
      pause (2)
   endwhile               ; When unit equals 3 proceed, else count unit and restart.
   ; This is where I would close the network.
endproc
Date: Wed, 14 Oct 1998 15:29:49 +0000
From: "J luis Soler Cabezas",
jsoca@etsii.upv.es
Subject: I need info
Hello, I have a TX Pro II motherboard with an onboard VGA video chip. The problem is that the XF86Config/X Window subsystem doesn't recognize this video chip; the fact is that Linux can't access the emulated video RAM.
I'm waiting for your news, and please, excuse my English.
----
Luis
Date: Mon, 19 Oct 1998 08:31:49 -0700
From: Ken Deboy,
glockr@locked_and_loaded.reno.nv.us
Subject: Win95 peer-to-peer vs. Linux server running Samba
I'm wondering if anyone can tell me the advantages of a Linux machine running as a print server for a network of Win95 machines vs. just hanging the printer off one of the Win95 machines and setting them up in a peer-to-peer arrangement. You don't have to convince me, because I _do_ run Samba as my print server, but what can I tell my friends to convince them, especially if they aren't having too many problems with their Windoze machines? Thanks for any comments, but no flames...
Ken Deboy
Date: Sun, 18 Oct 1998 18:03:57 -0400
From: "Gregory Engel",
rengel1@nycap.rr.com
Subject: How to add disk space to RH 5.1?
I am a new Linux user, having installed Red Hat 5.1 last month. (So far so good.) After installing several goodies and libraries like qt, I find myself running out of disk space on my / filesystem. I have a Syquest EZ-flyer removable disk drive that I didn't use at all during the install.
My question is: can I move some of the directories that defaulted to the root filesystem, like /tmp and /var, to this drive without a full re-installation, and if so, how? Also, I really couldn't figure out how to get the thing working during the install. It is a SCSI drive that connects to the parallel port. Red Hat lists it as a supported drive but was of little help when I asked them for specific instructions.
If there is some other strategy I might use to gain disk space without a re-installation I would like to hear it. I'm still amazed I got the thing going in the first place. The partitioning makes me nervous.
Thanks,
Greg Engel
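For what it's worth, here is a sketch of the usual relocation procedure (the device and mount-point names are assumptions; a parallel-port SCSI drive appears wherever its driver registers it). Do it in single-user mode so nothing is writing to the directory being moved.

# Hypothetical sketch: move /var onto a freshly formatted disk.
mke2fs /dev/sda1                 # format the new disk (device name assumed)
mkdir /mnt/newvar
mount /dev/sda1 /mnt/newvar
cp -a /var/. /mnt/newvar/        # copy, preserving ownership and modes
mv /var /var.old                 # keep the original until verified
mkdir /var
umount /mnt/newvar
mount /dev/sda1 /var
# Then add a matching line to /etc/fstab, for example:
#   /dev/sda1   /var   ext2   defaults   1 2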
Date: Tue, 20 Oct 1998 19:50:58 -0700
From: Michael McDaniel,
mmcdaniel@acm.org
Subject: imapd
I have found a lot of information about using clients with IMAP servers. I have found basically _nothing_ about how to actually make the imapd server on Linux do anything.
I can point NetScape Messenger at the localhost IMAP server and it (NS) dutifully says "no messages on server". Ok, I know that; how do I get messages on it?
My Suggestion:
Provide an article about imapd - how to set up hosts.allow for security, how to configure sendmail.cf to use it (I'm pretty sure this has to be done), how to set up user mailboxes, etc.
I would love to see an article like this. By the way, how can I be automatically notified when a new issue comes out? I thought I was receiving that information but maybe not - I haven't seen any info about the new articles as they come out lately.
Michael McDaniel
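Pending such an article, a minimal sketch of the inetd/tcp-wrappers side of the setup (the daemon name, paths, and network below are assumptions, and the stock UW imapd shipped with most distributions of the day is assumed):

# /etc/inetd.conf: hand IMAP connections to tcpd, which checks
# hosts.allow/hosts.deny before starting the real daemon:
imap    stream  tcp     nowait  root    /usr/sbin/tcpd  imapd

# /etc/hosts.allow: only localhost and the local net may connect:
imapd: 127.0.0.1, 192.168.1.

# /etc/hosts.deny: refuse everyone else:
imapd: ALL

With the UW server, messages should appear in the INBOX as soon as sendmail makes its normal local delivery to /var/spool/mail/<user>, so no special sendmail.cf work is needed for local mailboxes.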
Date: Mon, 26 Oct 1998 02:27:44 -0500
From: "Oblivion",
garymc@mail.portup.com
Subject: Help with Debian 2.0 install; CD-ROM not on the HDD controller card
I am having problems getting Debian 2.0 to install the important, extra, and/or optional packages, which include the kernel source and patches. I have a working base system, but it does not recognize the CD-ROM drive, so I cannot add or upgrade any program packages. I have tried moving the CD-ROM drive to run off the HDD controller, but then the system will not even get past the BIOS at startup. The system specs for this machine are at the bottom of this message.
CPU: Cyrix 5x86 100MHz
Hard drives: BigFoot 1.2 GB, WD 4.0 GB
Floppy drive: 3.5"
Bus type: PCI
Extra drives: TEAC CD-55 4x tray CD-ROM
Mouse: bus mouse on COM1
Modem: on COM2
Memory: 24 MB
Root directory: hdc7
Other OS on system: Windows 95
Kernel version: 2.0.34
Sound card: Sound Blaster Pro 16 compatible (the CD-ROM runs off it)

Gary
Date: Thu, 29 Oct 1998 17:53:29 +0100
From: Thierry Durandy,
thierry.durandy@art.alcatel.fr
Subject: Tie with the penguin logo
Do you know if I can find a tie with the Linux penguin logo on it? I would be interested in buying one to wear, to show my opinion while keeping the suit.
Thierry
Date: Fri, 30 Oct 1998 17:00:16 EST
From: Ross, IceRaven1@aol.com
Subject: Cirrus Logic is the pits
Help me: I have a huge computer science project to hand in on Monday at 11:00 GMT, and my university won't let us use the UNIX boxes on weekends. I have Linux, but alas I have a Cirrus Logic 5446 PCI card with 2 MB, and X Windows can't hack it: it corrupts the screen. My mate bought a new card to fix this problem. There must be a cheaper solution: a patch, a new server, whatever.
Also, any quick help on how to set up a PPP connection would be appreciated.
Cheers to anyone who can help.
A newly converted Linux user,
Ross
Date: Sun, 4 Oct 1998 22:39:09 +0200
From: A.R. (Tom) Peters,
tom@tompth.xs4all.nl
Subject: Linux certification
I read your article in Linux Gazette 33 on a Linux certification program with interest. However, I would like to point out (and I will not be the only one) that this issue was already raised by Phil Hughes in Linux Journal, Nov. 1997, p. 10; since then, there has been a still-active discussion at http://www.linuxjournal.com/HyperNews/get/certification.html. Therefore, I am somewhat surprised to see this paper appear in Linux Gazette without reference to those discussions. Moreover, Robert Hart of Red Hat has been actively defining a RH certification program; see http://www.redhat.com/~hartr/ .
In principle, I support initiatives like these. I strongly disagree, however, with Dan York's stress on the benefits for conference centers and publishers. Although I don't care if they make a lot of money out of it, I am very much afraid of the consequences: if something like this really catches on, only people who can afford the certification program will be taken seriously as Linux consultants or developers. Everyone else will be officially doomed to be an "amateur", irrespective of competence or contributions already made to the Linux movement. So I think we should NOT copy the expensive MCSE model, but keep Linux certification affordable.
--
Tom "thriving on chaos" Peters
Date: Sun, 4 Oct 1998 16:53:56 -0400
From: Dan York,
dyork@lodestar2.com
Subject: RE: Linux certification
Tom,
Many thanks for the pointers... I was not aware of the discussion
on the linuxjournal.com site and had, in fact, been quite unsuccessful
in finding such discussions on the web. Thank you.
Thank you for pointing out Robert Hart's site... yes, others have sent along that pointer as well. Maybe I missed it, but when I was going through Red Hat's site, I didn't see a link to his pages on certification. Thank you for sending the pointer... and I hope Red Hat and Caldera can unify their efforts. We'll see.
As far as your comments on the pricing, I understand your concerns. The struggle is to keep it affordable while also making it objective (which I would do through exams). In truth, Microsoft's MCSE program could cost only $600 (the price of the 6 exams), although in practice people spend much more for books and/or training classes.
Thanks for your feedback - and I look forward to whatever discussions evolve.
Regards,
Dan
Date: Sat, 3 Oct 1998 16:56:14 +0200
From: "David Andreas Alderud",
aaldv97@student.hv.se
Subject: Reb0l
Just thought I'd mention something everybody needs to know: Reb0l is no longer beta and is available from www.rebol.com. Really nice. I've used Reb0l since late last year (on my Amiga, though) and I'm really pleased; I sure think it will win out over every other scripting language.
Kind Regards,
Andreas Alderud.
Date: Fri, 2 Oct 1998 10:29:21 -0500 (CDT)
From:
mjhammel@graphics-muse.org
Subject: re: links between identical sections
Although I can't speak for other areas of the Gazette, the Graphics Muse can be searched using the Graphics Muse Web site. I have all the back issues online there with topical headings for the main articles in each issue. This feature just went live (online) last night, so it's brand new (which is why no one knew about it before :-).
Take a look at http://www.graphics-muse.org/linux.html and click on the "Muse" button. That will do it for you.
----
Michael J. Hammel,
The Graphics Muse
We've added those requested links to each of the regular columns now. Ellen Dahl did this good work for us. --Editor
Date: Fri, 2 Oct 1998 04:02:23 -0400
From: "Tim Gray",
timgray@geocities.com
Subject: Linux easy/not easy/not ready/ready YIKES!
Ok, I've noticed one very strong theme in every message I have ever read about Linux and why it won't be accepted on the desktop. Every message states, in one way or another, "if they see a command prompt, they will panic". I am appalled at how IT professionals view users as idiots and morons. I refuse to call myself an IT professional, because I help my users and clients use their software rather than "just fix it when they mess it up". A user can learn the command prompt quickly, and "just type setupmodem and press enter" (or whatever command or script you like) is easier to teach than "click on Start, Settings, Control Panel, System, blah blah blah...". I have started to move all my clients to Linux, starting with the servers, saving them time and money. And at one location I have a CEO who logs in as root and adds and removes users. Users are much smarter than everyone gives them credit for, and a command prompt doesn't affect them as if the devil had just spoken from the speakers. If the IT departments around the world put one fifth of the effort into educating their users that they put into complaining about them, it would be a non-issue. As computer professionals, we are here to keep things running and educate our users, not to sit on a pillar looking down with a look of "what did you do to it now?"
One last question: everyone says "I'll use Linux when it has a standard GUI". What is a standard GUI? Windows doesn't have one, and Linux is closer to a standard GUI than anything else available.
----
Tim Gray
Date: Tue, 06 Oct 1998 06:56:51 -0400
From: Nathaniel Smith, slow@negia.net
Subject: Information on Linux
I wrote to you with an article idea, saying I thought you should write an article on how to use Linux for us click-and-go people who are computer dummies, and you were kind enough to publish it. Before I wrote, I had already ordered four books (apparently the wrong ones); the two I received started out, "I will assume you already have a full working knowledge of Unix commands". Since then, several kind souls have taken their time and energy to point me in a direction where I can help myself, which is all anyone can ask. Some have gone even further and tried to help me with a hard drive problem I have. I would like to see someone try that with the Windows crowd; you would most likely end up with an empty mailbox. I think that says a lot about the type of people who use Linux, and I just want to thank you and everyone who has tried to help me. I will try to help myself before asking for any more help; I think I have enough to keep me busy learning for quite a while.
thank you
Nathaniel
Date: Thu, 8 Oct 1998 18:44:33 -0400
From: keith, keithg@bway.net
Subject: suggestion for Linux security feature
I wonder if you can point me in the right direction to make a suggestion for a new "feature" of Linux which could further help to differentiate it in the marketplace, and which might really give it a LOT of exposure (good) in today's security-conscious press...
The security of computer information has been in the press a lot lately, detectability of "deleted" files on people's hard drives, "secret" files, cache files, cookies, etc. which are out of the purview of the typical (and maybe even the advanced!) user. People either think they've deleted things which they haven't really expunged, or their files are infiltrated, perhaps by a child (accidentally, of course!).
It seems to me quite possible to structure an OS like UNIX (and Linux in particular, since it is under development by so many gifted people) in such a way that all such files are created in a directory under the current user's ownership, in a knowable and findable place, so that:
A. only that user could access their own cache, cookies, pointer files, etc. I do not know how deleted files could be safeguarded in this way, unless it is simply to encrypt everything. Hmmm.;
B. these files - the whole lot of them - could be scrubbed, wiped, obliterated (that's why it's important for them to be in a known and findable place) by their owner, without impairing the function of the applications or the system, and without disturbing similar such files for other users.
C. It would be nice, too, if there were a way to prevent the copying of certain files, including copying by backup programs. (For example, I'm a Mac user and we use Retrospect to back up some of our Macs; there's a feature to suppress the backing up of a particular directory by putting a special character (a "bullet", Option-8) at the beginning or end of the directory name.) But if this were an OS-level feature, it would be stronger.
If I'm user X, and I want to get rid of my computer, or get rid of everything that's mine on the computer, I should just be able to delete all of my data files (and burn them or wipe them or otherwise overwrite that area of the disk), which I can surely do today. But in addition, I should know where to go to do the same thing with whatever system level files might be out there, currently unbeknownst to me, and be able to expunge them also, without affecting anything for anyone else.
Who would work on such a thing as this? Who would I suggest this to? Of course, it's my idea. (c) Keith Gardner 1998. :) But if something like this could be set up, wouldn't it go a long way in the press, in corporate and government buying mind set, etc.?
I'm writing this very quickly, the idea really just came to me while reading the NY Times this morning with an article (in Circuits, 10/8/98) about computer security, and I am on my way out the door. I don't have time to give it much polish. But I hope the ideas are clear enough. Let me know what you think.
Thanks.
Keith Gardner
Date: Fri, 16 Oct 1998 15:41:25 -0500 (CDT)
From: Bret McGuire,
mersault@busboy.sped.ukans.edu
Subject: Availability of information for newbies
The October issue of Linux Gazette featured a number of mail messages from individuals seeking basic information on how to start up and run a useful Linux system. A common complaint among these individuals was that basic information was not readily available, leading to the rather humorous suggestion that anyone who operates a usable Linux system was somehow "born with this information". :)
This isn't the case. There are a number of locations on the Web which offer a great deal of information about the Linux operating system. The best starting point is probably still the Linux Documentation Project...
(or at least that's where I always go... I understand there are mirrors all over)
This site features HOWTO documents on nearly every topic you can imagine, along with current copies of the various Guides (everything from the Installation and Getting Started Guide thru The Linux Users' Guide thru The Linux Network Administrators' Guide, etc.). I suspect that this site either has the answer to your questions or has a link to someplace else that does. Definitely worth looking at...
----
Bret
Date: Mon, 19 Oct 1998 13:54:18 +0200
From: Jonas Erikson,
jonase@admin.kth.se
Subject: go go Network do or die!
My concern is that the free software alternative is heading for its grave due to outdated core bindings to the standard old UNIX core.
In comp.os.plan9 there are discussions like:
| Hasn't the coolness of Linux worn off? If you want true excitement with | how cool an OS is and the fun of pioneering again, how about cloning | Plan 9?
Later in the same thread:
| We need a new Linus to start writing a Plan 9 kernel. GNU's Hurd doesn't | go as far, as a cloned Plan 9 would.
And in other comp.os.* ... more...
I urge us not to start all over again, but to modify what is market-recognised and stable. Unlike many other freeware enthusiasts, I think there is a need for software infrastructure. A weakened Linux would scatter a lot of good work and inspiration, and a new alternative would take far too much time to reclaim the market's confidence in free software.
I know that what I suggest is a long way off in terms of Linux development, and that Linux holds a legacy of strong infrastructure. But I don't know whether Linux can tackle the infrastructure requirements building up after the first wave of Internet pioneering.
Users in the MS world see ACLs and sharing capabilities (thus only the image) as a condition for selecting a system. The development trend is also toward distributed services, not only inside corporations but also in trading services distributed via CORBA or DCOM. Other, lighter standards such as P3P are emerging and also require a more distributed approach.
If we look at sharing with supposedly "advanced" capabilities like CODA and AFS in file systems, that is just the beginning, and I think only a symptom of lacking structures inside UNIX. (CODA _is_ advanced in many aspects not at issue here.)
New Internet standards make UNIX applications handle more and more security features that are not compatible with the system. Building walls in systems by not providing infrastructure is not good for free software; it's not like the Internet at all.
The emerging operating system would be the most flexible in distributed security and compatible to old standards... And the idea to use a freeware alternative is to be ahead and in control.
Are we still?
So, for the Linux ext2fs kernel 2.3 ACL development: do embed [domain][gid/uid][rights] in the ACL entries!
Don't forget that:
Linux is to the "open/free" OS arena what Windows is to the whole OS arena.
And software is like infrastructure: only small differences are necessary to gain the market. Like roads, it needs to be compatible with most cars, yet still improve. Now that some of the infrastructure gradually being implemented sets new standards for cars, it's a bad idea not to take advantage of those standards.
Jonas
Date: Sun, 25 Oct 1998 06:53:39 -0500
From: "Bill Parker",
bparker@dc.net
Subject: Compliments on a great issue
Great issue. It will take me some time to absorb even some of the information and good ideas presented here.
I particularly benefited from "Thoughts about Linux," by Jurgen Defurne and "DialMon: The Linux/Windows diald Monitor," by Mike Richardson. I have not had time to read the rest yet.
Thanks and best wishes,
Bill Parker
Date: Mon, 26 Oct 1998 16:16:37 -0800
From: Dave Stevens,
dstevens@mail.bulkley.net
Subject: Rant
October 17, 1998, Smithers, B.C.
There is a lot of criticism of Linux that goes more or less like this: "Well, if it was so hot it would cost something. Everything free is no good." It isn't necessarily so, and it just isn't so.
Copyright is a social vehicle for compensating creators of intellectual property. The copyright expires eventually. Then the benefit of the intellectual work can, if it is of lasting value, be used more widely and, in principle at least, in perpetuity. This process and model are very familiar in other fields of intellectual endeavor but are new to computer programming. If we look at the body of English literature that fills our libraries and bookshelves, there is certainly no direct correspondence between copyright and quality. All of Shakespeare, to take a favorite of mine, is long out of copyright and is some of the best literature ever created. Or Mozart, or Dickens. You make the list.
The whole consumer software trip is too new for the copyright process and terms to have worked themselves out to full term. The concept of computer software as intellectual work, potentially of a high calibre, is just too new for social understanding to be widespread. The idea that intellectual work might be contributed and protected in such a way as to enlarge the realm of the possible in the computer part of the public sphere certainly has a way to go before people get used to it.
Does this mean that some of the criticism offered is superficial? To put it kindly, yes. The open source software community is collaboratively creating a standard for computer software below which any commercial vendor will fall at its peril. If you can have all this for free, will you actually pay to get an inferior product? Maybe by accident. But not twice. The growing acceptance of Linux is a step in the spread of the idea of a body of public-domain imperative literature. Its quality is no more to be judged by its price than a Chopin waltz.
I would be happy to discuss any of these ideas with coherent correspondents, and invite both comment and criticism.
Dave Stevens
For your information:
The Lexmark 40 color printers do all setup/alignment from MS whatever OS.
Lexmark first told me they don't support Linux on their new 40 & 45 printers (all alignment functions are from software under MS something). But hey, the guys at Lexmark came through. They sent me a C program for aligning the printer.
It would be a good candidate to go into an archive (sunsite.unc.edu). I don't know the process for putting software into an archive so I am passing it on to you folks. I also am sending it to Grant Taylor gtaylor+pht@picante.com, who is listed as the custodian of the Printing-HOWTO. The model 40 printer is PostScript and works well.
A user has problems with <B> tags not being visible in lynx. Well, this is not a problem with the page, but a problem of lynx configuration. At the bottom of the default lynx config file one can configure the colors used to display the different tags as one wishes. The comments in the config file make it very clear how to do it. One can start lynx with the option "lynx -cfg config.file", if I remember right.
Tomas
Jan Jansta wrote:
I have a permanent problem with mounting any vfat/dos filesystem with write permissions for all users on my Linux machine. I'm using Red Hat 5.1, kernel version 2.0.34. Does someone know what's not working properly?

Here's what I've done. The exact line from my /etc/fstab:

/dev/hda1 /mnt/win95 vfat umask=000,auto 1 1

The trick is in setting the umask.

Caolan
I saw your letter to Linux Gazette and decided to drop you a few pointers.
Linux Documentation Project:
First, the Linux Documentation Project is your friend. Take a look around
the site http://sunsite.unc.edu/LDP/. The documents that you'll find most
valuable as a new Linux user are the "Installation and Getting Started
Guide" and the "The Linux Users' Guide". Both are available for download in
multiple formats. Descriptions and pointers are at
http://sunsite.unc.edu/LDP/ldp.html. If you really consider yourself (or
your curious friends) clueless, then I'd advise you to buy a ream of paper
and print the PDF version of the Linux User's Guide for casual reading.
Then get one of the easier distributions, back up your Win95 data, and give
Linux a whirl.
Linux Distributions:
I'd recommend Caldera http://www.caldera.com/ for casual non-programmers
that are comfortable with Win95 and just want to try Linux. Current
versions of Caldera come with the KDE desktop http://www.kde.org/. KDE
presents a familiar interface to Win95 users. Red Hat
http://www.redhat.com/ is very popular and also relatively easy but is
oriented more toward knowledgeable computer users. I'm not familiar enough
with SUSE http://www.suse.com/ to make a recommendation, although it's
supposed to be easy too. Debian http://www.debian.org/ and Slackware
http://www.slackware.org/ are considered by most to be for those who
already know how to install and use Linux. There are other distributions,
but these are the most popular.
Included Documentation:
Once you get Linux installed, fire up Midnight Commander from the command
line using 'mc'. This is an easy to use file manager that, despite its DOS
look & feel, is also powerful. Use it to take a look around the /usr/doc
directory for the wealth of documentation installed in any popular Linux
system. You'll be astounded at the amount of information available if
you're accustomed to the Win95 way of doing things. The HOWTO documents in
particular will be very useful to new users. HOWTOs are cookbook-style
documents written by Linux users who have taken the time to share the steps
they took to accomplish something in Linux. Perhaps if you use Linux for a
while, you'll have occasion to write a HOWTO of your own.
Manual Pages
If you see references to a command in Linux and would like to know more
about using it, chances are you'll find a comprehensive description of the
command and all its options in the associated Manual Page. Just type: 'man
command' at the command line, substituting the name of the command you're
interested in and you'll be presented with a summary of the syntax, usage,
and available options for the command. Many man pages also include examples
and references to related man pages and commands. To see how to use the
manual page system itself, just use 'man man'.
Mailing Lists and Newsgroups
Mailing lists and newsgroups provide a good way to find the answer to a
question you haven't been able to find the answer to in the extensive
documentation included with Linux or available from the LDP. Mailing lists
are generally archived and the archives will probably be able to answer
your question. If not, post a note asking for a pointer to the
documentation and you'll probably get several good answers. If the problem
is simple enough, you'll probably get an explanation too. I've found
pointers to comprehensive documentation to be more valuable in the long run
though. Often, understanding the solution to one problem allows you to
solve other problems later. When subscribing to a mailing list or
newsgroup, try to find one that's specific to the distribution you use.
Most things are the same across distributions, but there are enough small
differences that new users would be best served by getting help that's
specific to their distribution.
One more thing: be prepared to do lots of reading. ;-)
--
Anthony E. Greene
To begin with, I often like to browse and/or reference present and past issues of the Linux Gazette. But I'm not always connected to the Internet, and even when I am, I hate waiting for a page to download, so I mirror it locally, both at home and at work.
On occasion I have found myself grepping the TWDT files for specific references to various topics, commands, packages, or whatever. But a plain grep of lg/issue??/issue??.html will show references in all but the first 8 issues. So I made some minor changes in lg/issue01to08 and put an alias (command) in ~/.bashrc to allow easy scanning of ALL issues.
First, the changes:
cd ~/html/lg/issue01to08
ln linux_gazette.html     lg_issue1.html
ln linux_gazette.aug.html lg_issue2.html
ln linux_gazette.sep.html lg_issue3.html
ln linux_gazette.oct.html lg_issue4.html
ln linux_gazette.nov.html lg_issue5.html

Now the command declaration (for bash):
lgfind () { command grep -i "$@" ~/html/lg/issue01to08/lg_issue?.html ~/html/lg/issue??/issue??.html | more ; }

The same declaration in C shell (csh):
alias lgfind 'grep -i "\!*" ~/html/lg/issue01to08/lg_issue?.html ~/html/lg/issue??/issue??.html | more'

I suppose I could have used "linux_gazette*" in my grep, but that would have put the resulting output out of order. Besides, these links allow the grep to show which issue number a match is found in.
And I suppose I could also have created either soft or hard links to ALL of the TWDT files in another directory. But I would then have to go there and add another link, every time a new issue came out.
Using this is simple, just:
lgfind <string>

As is obvious to most experienced UNIX users, I quote the string if it contains spaces. The string can also be a regular expression. You may have noticed the "-i": I don't like having to remember the case of the characters I'm looking for.
Once I have the output of lgfind, I point my browser to another html page that I have generated, that contains just links to all of the TWDT files. I will attach that page to this message. You can either add it to the base files, publish it, or whatever TYO. ;-) I put it in the directory that contains your `lg' directory.
I hope this helps someone else, too.
Ray Marshall
PS: I agree with your decision to use "TWDT". It can be read in whatever way one wishes, including very inoffensively. Wise choice.
From: Jan Jansta
I have a permanent problem with mounting any vfat/dos filesystem with write permissions for all users on my Linux machine. I'm using Red Hat 5.1, kernel version 2.0.34. Does someone know what's not working properly?

I had somewhat the same problem. What I did was to put this in my /etc/fstab:

/dev/hda1 /dos vfat user,noauto 0 0

I don't always want my /dos partition mounted, because I don't want its files cluttering up my db for locating files. But making it a user partition means that anyone can mount and use it.
Good luck,
Nick
Secure Mounting for DOS Partitions:
In order to open up permissions on your DOS partitions in a secure way, do the following:
Note: in the samples below, the dos usrid (63) and grpid(63) were selected so they wouldn't duplicate any other usrid or grpid in /etc/passwd or /etc/group.
Also, this solution works with Red Hat 5.1, you may have to adjust it slightly if you are using a different distribution.
1) Make a dos user who can't log in by adding the following line to /etc/passwd:

dos:*:63:63:MSDOS Accessor:/dos:

2) Make a dos group and add users to the dos group. In the following example, root and ejy are in the dos group. To do this, add a line like the following to /etc/group:

dos::63:root,ejy

3) Add the following line (changed to suit your system) to /etc/fstab:

/dev/hda1 /C vfat uid=63,gid=63,umask=007 0 0

Of course, you have to locate your DOS partitions in the first place. This is done by issuing the following commands as root:

/sbin/fdisk -l
df
cat /etc/fstab

The `fdisk -l` command lists all available devices, `df` shows which devices are mounted and how much is on them, and /etc/fstab lists all mountable devices. The devices remaining are either extended partitions (a kind of partition envelope, which you don't want to mount) or partitions allocated to other operating systems, which you may want to mount.

4) Create a mount point for your DOS disk by issuing the following commands as root:

mkdir /C
chown dos:dos /C
With this setup, the C: drive is mounted at boot time to /C. Only root and ejy can read and write to it. Note that vfat in /etc/fstab works for vfat16 (and vfat32 natively for Linux 2.0.34 and above).
Enjoy...
In issue 33 of the Linux Gazette you wrote:
When people are talking about printer drivers for Linux, they are mostly referring to a piece of code that enables the "Ghostscript" program to produce output on your printer.
I have a Canon BJC-250 color printer. I have heard many people say that the BJC-600 printer driver will let me print in color. But I have not heard anyone say where I can get such a driver. I have looked everywhere, but cannot find it. Can you help me?
Ghostscript is an interpreter of the Postscript page-description language. In the Unix world, it is kind of a lingua franca of talking to a printer. A lot of programs can produce Postscript output.
More expensive printers support Postscript in hardware, other printers need Ghostscript with a driver for that particular printer compiled in.
Invoke Ghostscript as "gs -?" to see a list of all the printers for which support is compiled in. If your printer is not in the list, use a driver for a printer from the same family. Otherwise you might have to compile GhostScript with another driver.
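For illustration, here is the sort of invocation Roland describes, as a sketch (the driver, file name, and printer device path are assumptions; adjust them to your setup):

# List the compiled-in drivers, then render file.ps through the
# bjc600 driver straight to the first parallel port:
gs -h
gs -q -dNOPAUSE -sDEVICE=bjc600 -sOutputFile=/dev/lp1 file.ps -c quit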
The Ghostscript 5.1 that I'm using (Debian distro) is compiled with the bjc600 driver.
Roland
In issue 33 of the Linux Gazette you wrote:
To use a Plug-and-Play device under Linux, you have to configure it. For that, you can use the isapnptools package. It will probably be included with your distribution.
I have already spent hours trying to get my Supra336 PnP internal modem and my HP DeskJet 720C working under Linux! The result is always the same: no communication with the modem and no page printed on the HP printer! Could someone help me? I am close to giving up!
Log in as root and execute the command "pnpdump > isapnp.conf". Now edit this file to choose sensible values for the parameters the modem requires. Read the isapnp.conf man page. You might want to run "cat /proc/interrupts", "cat /proc/dma" and "cat /proc/ioports" to see which interrupts, DMA channels and I/O addresses are already in use. Once you're finished, copy the isapnp.conf file to /etc (as root). You can now configure the card by issuing the command "isapnp /etc/isapnp.conf" as root.
This probably must be done before the serial ports are configured. Look at the init(8) manpage, and see where the serial ports are configured in the system initialization scripts. Make sure that isapnp is called before the serial ports are configured.
If the modem is an internal one, you might have to disable one of the serial ports in your BIOS, so the modem can use its address and interrupt.
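Condensed into a sketch (the values you uncomment in the config file are whatever suits your machine):

# As root: dump the card's possible settings into a skeleton config.
pnpdump > /etc/isapnp.conf
# Edit /etc/isapnp.conf, picking (IO ...) and (INT ...) choices that
# don't clash with /proc/interrupts, /proc/dma or /proc/ioports,
# then program the card:
isapnp /etc/isapnp.conf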
Now, about the printer: AFAIK all HP *20 models are Windows-only printers. They use the host computer's CPU to perform all kinds of calculations that are normally done by the printer hardware, so the printer needs a driver. Since HP doesn't release programming info on these devices, there will probably never be Linux drivers for them.
You should avoid this kind of brain-dead hardware (mostly referred to as "winprinters", or "winmodems").
Hope this helps :-)
Roland
If you want the best of both worlds of Java and Ada, write applets targeted to the JVM in Ada! See these URLs for further info:
http://www.adahome.com/Resources/Ada_Java.html http://www.buffnet.net/~westley/AdaJava/
--
Terry J. Westley
You asked:
Most printing on Linux is handled through the use of the Ghostscript drivers. Ghostscript takes PostScript input directed to it via the lpr command and converts it to the raw data streams that a particular output device can handle. Ghostscript can handle devices like printers but can also be used to display PostScript files on your screen (via the ghostview program).
I have a Canon BJC-250 color printer. I have heard many people say that the BJC-600 printer driver will let me print in color. But I have not heard anyone say where I can get such a driver. I have looked everywhere, but cannot find it. Can you help me?
To see if you have ghostscript installed, type the following:
% gs -v

"gs" is the command name for the ghostscript program (yes, it's really one program that has a bunch of output drivers compiled into it). The -v option asks it to print version information. If you have gs installed you'll see something like this:
Aladdin Ghostscript 4.03 (1996-9-23)
Copyright (C) 1996 Aladdin Enterprises, Menlo Park, CA. All rights reserved.
Usage: gs [switches] [file1.ps file2.ps ...]
Most frequently used switches: (you can use # in place of =)
 -dNOPAUSE           no pause after page   | -q       `quiet', fewer messages
 -g<width>x<height>  page size in pixels   | -r<res>  pixels/inch resolution
 -sDEVICE=<devname>  select device         | -c quit  (as the last switch)
                                           |          exit after last file
 -sOutputFile=<file> select output file: - for stdout, |command for pipe,
                                         embed %d or %ld for page #
Input formats: PostScript PostScriptLevel1 PostScriptLevel2 PDF
Available devices:
   x11 x11alpha x11cmyk x11mono deskjet djet500 laserjet ljetplus ljet2p
   ljet3 ljet4 cdeskjet cdjcolor cdjmono cdj550 pj pjxl pjxl300 bj10e
   bj200 bjc600 bjc800 faxg3 faxg32d faxg4 pcxmono pcxgray pcx16 pcx256
   pcx24b pbm pbmraw pgm pgmraw pgnm pgnmraw pnm pnmraw ppm ppmraw
   tiffcrle tiffg3 tiffg32d tiffg4 tifflzw tiffpack tiff12nc tiff24nc
   psmono bit bitrgb bitcmyk pngmono pnggray png16 png256 png16m
   pdfwrite nullpage
Search path:
   . : /usr/openwin/lib/X11/fonts/Type1 : /usr/openwin/lib/X11/fonts/Type3 :
   /opt/AEgs/share/ghostscript/4.02 : /opt/AEgs/share/ghostscript/fonts
For more information, see /opt/AEgs/share/ghostscript/4.02/doc/use.txt.
Report bugs to ghost@aladdin.com; use the form in new-user.txt.
This output comes from a version of ghostscript built for a Solaris system by someone other than myself. I don't know if this is the default set of devices you'll see on a Linux distribution or not.
The "available devices" say which devices you can use with gs. In this case the bubble jet 250 is not specifically listed (I suspect it would say bjc250, but I could be wrong), so I would (if I were using that particular printer) have to get the source and read the devices.txt file to find out if this printer is supported, either by its own driver or by one of the other drivers (perhaps the bjc600 supports it, for example).
This is the short explanation. To summarize, you'll need to familiarize yourself with Ghostscript and using lpr. If you're lucky and this printer is commonly supported by the various Linux distributions then you may already have this printer configured in the ghostscript you have installed on your box.
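To give a feel for the lpr side of this, here is a minimal sketch of the classic arrangement, with hypothetical queue names and paths (your distribution's print tools may generate something similar for you). The /etc/printcap entry routes jobs through an input filter:

lp|bjc:\
        :lp=/dev/lp1:\
        :sd=/var/spool/lpd/bjc:\
        :if=/var/spool/lpd/bjc/filter:\
        :sh:

and the filter itself (make it executable) lets gs do the translation:

#!/bin/sh
# lpd feeds the job on stdin and expects printer-ready bytes on
# stdout; the driver name here is an assumption for this printer.
exec gs -q -dNOPAUSE -sDEVICE=bjc600 -sOutputFile=- -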
For information on Ghostscript you'll need to look at the Ghostscript FAQ at http://www.cs.wisc.edu/~ghost/gsfaq.html. Note that there are two versions of Ghostscript: Aladdin's and the GNU version. Aladdin's is a commercial product but it's free for personal use. If you're not planning on redistributing it then I recommend the Aladdin version.
Okay, that's all the good news. I just checked the devices list at http://www.cs.wisc.edu/~ghost/aladdin/devices.html and it doesn't list the Canon Color Bubble Jet 250. If this printer is supported it's either with a newer, unlisted driver or by one of the other drivers. You'll probably need to check the .txt files that come with the source, find the author of the Color Bubble Jet drivers and drop them a line to see if they know if this printer will work with one of the existing drivers.
Hope that helps point you in the right direction.
Michael J. Hammel, The Graphics Muse
Jan,
My /etc/fstab contains this line:
/dev/hda4 /f: vfat defaults,umask=007,gid=101 1 1

This mounts my DOS partition at /f: (to match when I boot NT). It allows root, or anyone in group 101, to read or write the directory. I set up the 101 group so that only people in that group can write to /f:.
To allow everyone, change it to defaults,umask=000.
Scott Carlson
In order to get SMB printing to work under Red Hat Linux 5.1 with my username (which has a single space in it), I made the following addition to the Red Hat print filter "smbprint", located in: /usr/lib/rhs/rhs-printfilters/smbprint
USER="`echo $usercmd | awk '{printf "%s %s", $2, $3}'`"%$password
usercmd=""

(The above lines were inserted just prior to the last line in the script, which on my system was):

(echo "print -"; cat) | /usr/bin/smbclient "$share" $password -E ${hostip:+-I} $hostip -N -P $usercmd 2>/dev/null

This has the effect of setting the USER variable to "User Name"%password, where User Name is the name of the user as passed into the script in the $usercmd variable. awk is used to strip out the leading "-U" supplied as part of $usercmd somewhere up the command chain.
This solution only works for usernames with a single space in them. A more complex and full-featured solution would deal with no spaces or multiple spaces, either way. In any case, I feel Red Hat should find a general solution to this and incorporate it in their next release.
Warren
P.S. Thanks for a great forum for sharing tips and tricks
for Linux. BTW, does Red Hat read these tips? I'd
appreciate it if someone would submit this bug to them for
fixing.
Date: Wed, 07 Oct 98 06:38:02 -0800
From: vwdowns@bigfoot.com
Subject: Generalized fix for SMB printing - usernames with spaces
I wrote you earlier about a bug in Red Hat 5.1's /usr/lib/rhs/rhs-printfilters/smbprint
I later realized a simple generalized solution, by looking at the source code in more detail. The lines I added before can be replaced with:
export USER="$user"%$password
usercmd=""

(Just prior to the last line, which calls smbclient.)
For a more full-featured fix, simply modify the handling of $usercmd as follows:
1. Replace references to $usercmd with references to $USER.
2. Set and export $USER conditionally, as $usercmd is at present.
3. Remove $usercmd from usage entirely.
The only reliable way to pass a username/password to smbclient is via the USER environment variable.
1. The environment variable will not be seen on the cmd line by someone running ps, thus not exposing your password accidentally.
2. User names/passwords passed on the command line cannot contain spaces. If you embed them in quotes, smbclient keeps the quotes instead of trimming them off, causing username/password mismatch on the server. If you leave off the quotes, normal command-line parsing separates the username/password into separate parameters, and only the first word of each will get used.
Anyone using Red Hat print-filters will want to fix this, just in case they ever decide to set up SMB printing and are stuck with spaces in their username/password (as I am).
Warren E. Downs
We all use Netscape every now and then. Most people won't use it as a mail reader, since it is too bloated and the UNIX mail readers are generally much better.
Nevertheless, Netscape creates a directory nsmail in the user's home directory every time it starts and doesn't find one, even if mail is not used. This is annoying. Here's a trick which doesn't make the directory go away, but at least makes it invisible.
I didn't find a GUI equivalent to change this setting so you have to do the following:
Edit the file ~/.netscape/preferences.js and change all occurrences of 'nsmail' to '.netscape'. The important thing here is, of course, the leading dot before 'netscape'.
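If you'd rather not hand-edit, a one-liner does the same job (a sketch; quit Netscape first so it doesn't rewrite the file behind your back):

# Rewrite the preference via a temporary file, then swap it into place.
cd ~/.netscape
sed 's/nsmail/.netscape/g' preferences.js > preferences.js.new &&
mv preferences.js.new preferences.js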
Regards,
hjb
I saw your request for help in the Linux Gazette re Cobol. I've been using AcuCobol for 2 years under Linux and I strongly recommend the product and the company.
I don't know the cost of the compiler because my company bought it, but email them and ask for a student copy; they can only refuse... They have a full development environment called 'AcuBench', which currently runs only under Windows.
The amazing thing about AcuCobol is that programs compiled on one platform will run totally unchanged on another machine - I tend to develop under Windows but install at clients sites on Linux. I hope this has been helpful.
Regards
John Leach
This has probably come up before, but the "more fun with pipes" thing in issue 33 reminded me of it.
Have a different signature appear in your emails every time you send one.
Create a subdirectory in your home directory called .signatures and copy your .signature file into it under a visible name. Delete your .signature file and create a pipe in its place using:
mkfifo .signature

Create a script which simply cats each of the files in the .signatures directory out to the .signature pipe:
#!/bin/sh
while true
do
    for SIGNATURE in ${HOME}/.signatures/*
    do
        # Cat each file out to the .signature pipe and throw away any errors.
        cat ${SIGNATURE} > ${HOME}/.signature 2> /dev/null
        # This sleep seems to be required for Netscape to work properly.
        # I think buffering on the filesystem can cause multiple signatures
        # to be read otherwise; the sleep allows Netscape to see the
        # End Of File.
        sleep 1
    done
done

Have this script kick off in the background every time you log in to the system, in your profile or xsession. Add more entries to the .signatures directory and they automatically get used in your emails.
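For example (the script path here is hypothetical), one line at the end of ~/.profile starts it at login:

# Start the signature rotator in the background, once per login.
$HOME/bin/rotate-signatures.sh > /dev/null 2>&1 &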
Issues and problems:
One issue might be blocking on the pipe. If there is no process feeding
signature files down the pipe, any programs which open the pipe can
appear to hang until something is written.
--
Colin Smith
If you have installed Red Hat version 5.0, you will have come across this problem. It will not take you long to realise that the backspace key (by this I mean the key above the ENTER key) and the delete key (by this I mean the key below the INSERT key and to the left of the END key) behave differently on the console than in xterm under X-windows.
This is extremely irritating if, like me, you work in both the text-only console and xterm in X-windows. I set about making sure that the behaviour is the same in both. In other words, I want them to be standardised.
My solution is to make the backspace, delete and the pageup and pagedown key to behave exactly like they do in the text-only console.
The literature for doing this is available on the web; here I shall show those who have not done it yet the steps needed to achieve it. A word of warning! This is dangerous. You can potentially stuff things up very, very badly. In other words, you must do this extremely carefully (and make lots of backups).
For your information I included the links below where you may obtain more details about this matter.
http://www.best.com/~aturner//RedHat-FAQ/ http://www.ibbnet.nl/~anne/keyboard.html
Okay, now for the step-by-step instructions to fix the problem.

Step one is to create a directory to store the original files:

ksiew > mkdir original-terminfo-file
ksiew > cd original-terminfo-file/
original-terminfo-file > pwd
/home/ksiew/original-terminfo-file
Step two is to save the original copy of the xterm terminfo file:

original-terminfo-file > locate xterm | grep terminfo | grep x/xterm
/usr/lib/terminfo/x/xterm
/usr/lib/terminfo/x/xterm-bold
/usr/lib/terminfo/x/xterm-color
/usr/lib/terminfo/x/xterm-nic
/usr/lib/terminfo/x/xterm-pcolor
/usr/lib/terminfo/x/xterm-sun
/usr/lib/terminfo/x/xterms
/usr/lib/terminfo/x/xterms-sun
original-terminfo-file > cp /usr/lib/terminfo/x/xterm xterm.original
original-terminfo-file > ls -al
total 5
drwxrwxr-x   2 ksiew    ksiew        1024 Oct 18 15:35 .
drwxr-xr-x  24 ksiew    ksiew        2048 Oct 18 15:31 ..
-rw-rw-r--   1 ksiew    ksiew        1380 Oct 18 15:35 xterm.original

Step three is to dump the xterm terminfo entry to an editable text file with infocmp:
original-terminfo-file > infocmp xterm > xterm
original-terminfo-file > less ./xterm

# Reconstructed via infocmp from file: /usr/lib/terminfo/x/xterm
xterm|vs100|xterm terminal emulator (X11R6 Window System),
        am, km, mir, msgr, xenl, xon,
        cols#80, it#8, lines#65,
        acsc=``aaffggjjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~..--++\054\054hhII00,
        bel=^G, bold=\E[1m, clear=\E[H\E[2J, cr=^M,
        csr=\E[%i%p1%d;%p2%dr, cub=\E[%p1%dD, cub1=^H,
        cud=\E[%p1%dB, cud1=^J, cuf=\E[%p1%dC, cuf1=\E[C,
        cup=\E[%i%p1%d;%p2%dH, cuu=\E[%p1%dA, cuu1=\E[A,
        dch=\E[%p1%dP, dch1=\E[P, dl=\E[%p1%dM, dl1=\E[M,
        ed=\E[J, el=\E[K, enacs=\E(B\E)0, home=\E[H, ht=^I,
        ich=\E[%p1%d@, ich1=\E[@, il=\E[%p1%dL, il1=\E[L,
        ind=^J, is2=\E[r\E[m\E[2J\E[H\E[?7h\E[?1;3;4;6l\E[4l,
        kbs=^H, kcub1=\EOD, kcud1=\EOB, kcuf1=\EOC, kcuu1=\EOA,
        kend=\EOe, kent=\EOM, kf1=\E[11~, kf10=\E[21~,
        kf11=\E[23~, kf12=\E[24~, kf2=\E[12~, kf3=\E[13~,
        kf4=\E[14~, kf5=\E[15~, kf6=\E[17~, kf7=\E[18~,
        kf8=\E[19~, kf9=\E[20~, khome=\EO\200, kich1=\E[2~,
        kmous=\E[M, knp=\E[6~, kpp=\E[5~, rc=\E8, rev=\E[7m,
        ri=\EM, rmacs=^O, rmam=\E[?7l, rmcup=\E[2J\E[?47l\E8,
        rmir=\E[4l, rmkx=\E[?1l\E>, rmso=\E[m, rmul=\E[m,
        rs1=^O, rs2=\E[r\E[m\E[2J\E[H\E[?7h\E[?1;3;4;6l\E[4l\E<,
        sc=\E7, sgr0=\E[m, smacs=^N, smam=\E[?7h,
        smcup=\E7\E[?47h, smir=\E[4h, smkx=\E[?1h\E=,
        smso=\E[7m, smul=\E[4m, tbc=\E[3k, u6=\E[%i%d;%dR,
        u7=\E[6n, u8=\E[?1;2c, u9=\E[c,

Step four is to edit the keyboard capabilities.
Change from
kbs=^H, kcub1=\EOD, kcud1=\EOB, kcuf1=\EOC, kcuu1=\EOA,
kend=\EOe, kent=\EOM,

to
kbs=\177, kcub1=\EOD, kcud1=\EOB, kcuf1=\EOC, kcuu1=\EOA,
kdch1=\E[3~, kend=\EOe, kent=\EOM,

Step five is to compile the edited entry into a private terminfo directory:
original-terminfo-file > mkdir ~/.terminfo
original-terminfo-file > export TERMINFO=~/.terminfo

If you are using tcsh, type instead:
original-terminfo-file > setenv TERMINFO ~/.terminfo

Then:

original-terminfo-file > export TERM=xterm

If you are using tcsh, type instead:
original-terminfo-file > setenv TERM xterm

Now compile the edited entry as root:

original-terminfo-file > su
password: opensesame
#| tic xterm

Step six is to install the compiled entry system-wide:
#| cd ~/.terminfo/x/
#| cp xterm /usr/lib/terminfo/x/xterm
#| cd ~
#| rm -rf .terminfo
#| exit
ksiew>

Step seven is to log off and log back on (this is to get rid of the TERMINFO variable) and change the .Xdefaults file:
ksiew> logout
login: ksiew
password: opensesame
ksiew> less .Xdefaults

In the output from less, change the following lines:

xterm*VT100.Translations: #override\n\
        <KeyPress>Prior : scroll-back(1,page)\n\
        <KeyPress>Next : scroll-forw(1,page)
nxterm*VT100.Translations: #override\n\
        <KeyPress>Prior : scroll-back(1,page)\n\
        <KeyPress>Next : scroll-forw(1,page)

to the following lines:

xterm*VT100.Translations: #override\n\
        <KeyPress>Prior : string("\033[5~")\n\
        <KeyPress>Next : string("\033[6~")
nxterm*VT100.Translations: #override\n\
        <KeyPress>Prior : string("\033[5~")\n\
        <KeyPress>Next : string("\033[6~")
*VT100.Translations: #override \
        <Key>BackSpace: string(0x7F)\n\
        <Key>Delete: string("\033[3~")\n\
        <Key>Home: string("\033[1~")\n\
        <Key>End: string("\033[4~")
*ttyModes: erase ^?

That's it! Save the .Xdefaults file, and now when you start X-windows the backspace key, the delete key, the pageup key and the pagedown key will work just like in the text-only console.
Steven
If you have ever used zgrep on gzipped text files, you will have realised what a wonderful program it is. zgrep allows you to grep a text file even if it is compressed in gzip format. Not only that, it can also grep an uncompressed text file. For example, suppose you have the following directory:
testing > ls -al
total 2086
drwxrwxr-x   2 ksiew    ksiew        1024 Oct 18 11:07 .
drwxr-xr-x  24 ksiew    ksiew        2048 Oct 18 11:00 ..
-rwxrwxr-x   1 ksiew    ksiew     1363115 Oct 18 11:01 cortes.txt
-rwxrwxr-x   1 ksiew    ksiew      172860 Oct 18 11:01 lost_world_10.txt.gz
-rwxrwxr-x   1 ksiew    ksiew      582867 Oct 18 11:00 moon10a.txt

Then if you are looking for the word "haste":
testing > zgrep -l haste *
cortes.txt
lost_world_10.txt.gz
moon10a.txt

This tells you that "haste" is in all three files.
Now if you compress a text file using the famous bzip2 compression program, you have a problem:

testing > bzip2 cortes.txt
testing > ls -al
total 1098
drwxrwxr-x   2 ksiew    ksiew        1024 Oct 18 11:12 .
drwxr-xr-x  24 ksiew    ksiew        2048 Oct 18 11:12 ..
-rwxrwxr-x   1 ksiew    ksiew      355431 Oct 18 11:01 cortes.txt.bz2
-rwxrwxr-x   1 ksiew    ksiew      172860 Oct 18 11:01 lost_world_10.txt.gz
-rwxrwxr-x   1 ksiew    ksiew      582867 Oct 18 11:00 moon10a.txt
testing > zgrep -l haste *
lost_world_10.txt.gz
moon10a.txt

What happens now is that zgrep no longer recognises the file cortes.txt.bz2 as a compressed text file.
What we need is a new program, bzgrep, which can recognise bzip2-compressed text files.
The best way to create bzgrep is to modify the existing zgrep script.
testing > locate zgrep
/usr/bin/zgrep
/usr/man/man1/zgrep.1
testing > su
password: opensesame
#| cp /usr/bin/zgrep /usr/local/bin/bzgrep

The bzgrep file is now a copy of the zgrep script.
We can now change the last few lines to the following:
res=0
for i do
  if test $list -eq 1; then
    bzip2 -cdf "$i" | $grep $opt "$pat" > /dev/null && echo $i
    r=$?
  elif test $# -eq 1 -o $silent -eq 1; then
    bzip2 -cdf "$i" | $grep $opt "$pat"
    r=$?
  else
    bzip2 -cdf "$i" | $grep $opt "$pat" | sed "s|^|${i}:|"
    r=$?
  fi
  test "$r" -ne 0 && res="$r"
done
exit $res
Now bzgrep is a program that will be able to grep bzip2-compressed text files. BUT there is a problem.
The bzgrep program WILL NOT recognise ordinary text files or gzip-compressed text files. This is a major problem! It means you would have to compress all your text files with bzip2 in order to use bzgrep.
Luckily there is always a solution in Linux. All we have to do is alter the program to be more choosy about which decompression program to use, i.e. whether it uses gzip -cdfq or bzip2 -cdf.
Now change the last few lines again to resemble this
res=0
for i do
  case "$i" in
  *.bz2)
    if test $list -eq 1; then
      bzip2 -cdf "$i" | $grep $opt "$pat" > /dev/null && echo $i
      r=$?
    elif test $# -eq 1 -o $silent -eq 1; then
      bzip2 -cdf "$i" | $grep $opt "$pat"
      r=$?
    else
      bzip2 -cdf "$i" | $grep $opt "$pat" | sed "s|^|${i}:|"
      r=$?
    fi
    ;;
  *)
    if test $list -eq 1; then
      gzip -cdfq "$i" | $grep $opt "$pat" > /dev/null && echo $i
      r=$?
    elif test $# -eq 1 -o $silent -eq 1; then
      gzip -cdfq "$i" | $grep $opt "$pat"
      r=$?
    else
      gzip -cdfq "$i" | $grep $opt "$pat" | sed "s|^|${i}:|"
      r=$?
    fi
    ;;
  esac
  test "$r" -ne 0 && res="$r"
done
exit $res

Finally, this is the contents of a working bzgrep program.
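As a quick sanity check, repeating the earlier search should now find the word in all three files, including the bzip2-compressed one. Illustrative output (assuming the same test directory as above):

testing > bzgrep -l haste *
cortes.txt.bz2
lost_world_10.txt.gz
moon10a.txt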
Steve
In a previous message, dino jose says:

Actually, you don't run Linux on the PalmPilot itself (although there is a project to do so - I don't know much about that however). You run Linux on your PC and transfer data files between the Linux system and the Pilot. You still run the same programs you normally would *on* the PalmPilot - it's just that you can transfer these programs and their data files to the Pilot using tools on Linux.
Hi Mike, I read your article about Linux and the Palm Pilot. It's very interesting. I am kind of new to the LINUX platform! Because I am so curious about Linux, I bought a Palm Pilot III, the new version of the Palm, with 2 meg of memory. The main problem is, I don't know where to get the Linux operating system that runs on the Palm Pilot III, the newer version. What about the HOW TO LINUX DOCUMENTATION from its official site? Once I get this software, do I run it in the Linux operating system and then transfer it to the Palm III? I am kind of a novice in Linux. If you could help me, I would gladly appreciate it. Thanks a lot....
Don't let using Linux confuse you. You use Linux in the same way you use Microsoft Windows - it runs on your PC to do word processing or spreadsheets or whatever. You then pass data files back and forth to the Pilot using special tools.
If you want to try out a program that helps transfer files back and forth, you can try my XNotesPlus. It's a sticky notes program that will allow you to do backups of your Pilot to your local hard disk, and will transfer the Address database from the Pilot for use in doing some simple printing of envelopes. You can download the program from http://www.graphics-muse.org/xnotes/xnotes.html. You will also need to get the PilotLink software that I described in the article you read. XNotesPlus uses PilotLink to do the actual data transfers to and from the Pilot.
Hope this helps.
Michael J. Hammel, The Graphics Muse
Some people I know went nuts trying to install Acrobat Reader 3.01 as a helper app in Netscape, as shipped with Red Hat Linux 5.1. Here's how I've done it:
1. Download Acrobat Reader 3.01 from ftp.adobe.com. Let the installer script install the whole thing under /usr/local/Acrobat3.
2. Create the following shell script: /usr/local/Acrobat3/bin/nsacroread
#!/bin/sh
unset LD_PRELOAD
exec /usr/local/Acrobat3/bin/acroread $* >$HOME/.nsacroread-errors 2>&1
3. Don't forget to make this script executable:
# chmod 755 /usr/local/Acrobat3/bin/nsacroread

4. If the directory /usr/local/lib/netscape doesn't already exist, create it.
5. Copy (exactly) the following two files into this directory.
mailcap:
#mailcap entry added by Netscape Helper
application/pdf;/usr/local/Acrobat3/bin/nsacroread -tempFile %s

mime.types:
#--Netscape Communications Corporation MIME Information
#Do not delete the above line. It is used to identify the file type.
#
#mime types added by Netscape Helper
type=application/pdf \
desc="Acrobat Reader" \
exts="pdf"
Note: You can do without the last two steps and instead configure the helper apps with the Edit >> Preferences menu of Netscape. This will create similar .mailcap and .mime.types files in the user's home dir. But IMHO the first method is best because this way you can configure Acrobat Reader for all users at once.
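One extra suggestion (not part of the original recipe): before wiring the wrapper into Netscape, you can sanity-check it from a shell. The sample PDF path here is made up:

$ /usr/local/Acrobat3/bin/nsacroread /tmp/sample.pdf
$ cat $HOME/.nsacroread-errors

If Acrobat Reader comes up and the error file stays empty, the mailcap entry should work too.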
Cheers,
Louis-Philippe Brais
1. I was having some real problems with Netscape (3.04 Gold) the other day. No matter what I did, I could not get the helpers to work. Somewhere in the back of my mind, I knew that they had worked in the past, but I couldn't see anything that I'd changed. A few messages on various newsgroups turned on the lights: I had upgraded my Bash to 2.0.0--and this version has a bug in it. Expressions of the form ((..) .. ) are interpreted as arithmetic expressions, rather than nested sub-shells. Upgrading to 2.02.1(1) was almost painless and fixed the Netscape problem.
To get 2.02.1(1) go to the gnu site (www.gnu.org) and follow the links to the software sections. The new software should compile out of the box (it did for me). One problem I had was that the install script put the new binaries in /usr/local/bin, and since I had my old versions in /usr/bin they weren't found. A quick mv solved that.
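To illustrate the parsing bug (a made-up example, not the actual Netscape helper command): under bash 2.0.0, a command that begins with two adjacent parentheses is misread, while separating them restores the intended nested sub-shells:

((date; ls) 2>&1)     # bash 2.0.0 takes the leading "((" as an arithmetic expression
( (date; ls) 2>&1)    # the extra space forces the nested sub-shell reading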
2. For a number of years I've been struggling trying to read the results of color-ls on my xterm screens. A number of the colors (green and blue) were just too bright to read. I didn't want to turn down the brightness of my monitor...so I mostly just squinted. For some reason I was looking at the XTerm.ad file, and noticed that the colors could be adjusted! The XTerm.ad file should be located in /usr/lib/X11/app-defaults (or something similar). It is read each time a new xterm start up and sets various options. If you look near the end of this file you'll see a number of definitions for the VT100 colors. I changed:
*VT100*color2: green3

to
*VT100*color2: green4

and
*VT100*color6: cyan3

to
*VT100*color6: cyan4

Like magic, the colors are darkened and I can read the results. If you don't want to fool with your global default file, you could also just add the entries to your ~/.Xresources file.
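For the per-user variant, the entries are the same two lines (the resource names come straight from the app-defaults file):

*VT100*color2: green4
*VT100*color6: cyan4

Then reload them with xrdb -merge ~/.Xresources and start a fresh xterm.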
--
Bob van der Poel
I had an S3 Virge/DX, and couldn't get it working well in XFree86. This made me very mad, since there is a specific XF86_S3V (S3 ViRGE) server.
I used a borrowed Xaccel, but it made me feel guilty real quick. :) So I decided that I needed to get XFree86 configured well, and then ditch Xaccel. I found that xf86config cannot really be used to configure a Virge well.
Here are the modelines I use for a mid-range 17-inch monitor @ 16bpp using the SVGA server. *WARNING* If this blows up your monitor/card, it's not my fault, although it shouldn't.
Modeline "640x480" 31.5 640 680 720 864 480 488 491 521 ModeLine "640x480" 31.5 640 656 720 840 480 481 484 500 -HSync -VSync Modeline "640x400" 36 640 696 752 832 480 481 484 509 -HSync -VSync Modeline "800x600" 40 800 840 968 1056 600 601 605 628 +hsync +vsync Modeline "800x600" 50 800 856 976 1040 600 637 643 666 +hsync +vsync Modeline "800x600" 60.75 800 864 928 1088 600 616 621 657 -HSync -VSync # Modeline "1024x768" 85.00 1024 1032 1152 1360 768 784 787 823 Modeline "1024x768" 85.00 1024 1052 1172 1320 768 780 783 803 # Modeline "1152x864" 85.00 1152 1240 1324 1552 864 864 876 908 Modeline "1152x864" 85.00 1152 1184 1268 1452 864 880 892 900This cured me of using Xaccel, and should cure your S3 Virge blues. P.S. A S3 Virge can go up to 1600x1000?
Andy
Contents:
The December issue of Linux Journal will be hitting the newsstands November 6. The focus of this issue is System Administration. We have an interview with Linus Torvalds and an article about the 2.2 kernel. We also have articles on X administration and performance monitoring tools. Check out the Table of Contents at http://www.linuxjournal.com/issue56/index.html. To subscribe to Linux Journal, go to http://www.linuxjournal.com/ljsubsorder.html.
Check out this cool article in TIME!
http://cgi.pathfinder.com/time/magazine/1998/dom/981026/technology.the_mighty_f1a.html.
Dr. Bertrand Meyer, designer of the Eiffel programming language, was in Seattle to give his one-day seminar on "Design by Contract". The purpose of this seminar is to teach software engineers this unique object-oriented development method, which emphasizes building reliable systems through well-defined specifications and communication between the different parties to the system.
Talking to Dr. Meyer by phone, I asked him how Eiffel was better than Java. He gave me three reasons:
It is now possible to boot the Corel NetWinder Computer with Debian GNU/Linux, thanks to the work of Jim Pick and the other team members of the Debian Arm port. A disk image with instructions on how to use it is available from ftp://ftp.jimpick.com/pub/debian/netwinder/bootable-image/
A kernel package of the new ELF kernel is also available (some notes are at http://www.netwinder.org/~rahphs/knotes.html).
This alleviates the need for the chroot environment that previous development work was being conducted in, and allows work to progress even faster than before. This will also allow more people to join in the development effort easily.
Open Letter from Corel to Debian
Date: Thu, 29 Oct 1998 10:33:43 -0500
Software in the Public Interest, Inc. is pleased to announce its new web
pages. They can be found at
http://www.spi-inc.org/. SPI is a non profit
organization that was founded to help projects in developing software for
the community. Several projects are currently being supported through
monetary and hardware donations that have been made to SPI. As SPI is a
non profit organization, all donations to it and the projects it supports
are tax deductible.
Projects that are affiliated with and receive support from SPI are:
For more information:
SPI Press Contact, press@spi-inc.org
SPI homepage: http://www.spi-inc.org/
The leading printed Amiga magazine in Sweden, AmigaInfo, is starting a daily news section in Swedish for Amiga and Linux news.
AmigaInfo will also start a large Linux section (about 25 pages to start with) in the upcoming issue.
http://www.xfiles.se/amigainfo/
A student at UCLA, and several of the Linux users there in the dorms claim they are experiencing severe discrimination. The whole story is at http://www.nerdherd.net/oppression/. Take a look!
The Debian GNU/Linux 2.0 'Hamm' distribution was recently recognized by Australian Personal Computer Magazine http://www.apcmag.com/. It received the 'Highly Commended Award' for being 'a very high-quality distribution, with an extensive selection of carefully prepared software packages.'
More information including a review of the distribution can be found at http://apcmag.com/reviews/9809/linux.htm.
Check out lug_support@ntlug.org
About this mailing list:
The purpose of this mailing list is to provide an open forum to discuss anything
related to starting, growing and maintaining Linux User Groups. Whether you
are trying to get a new LUG started and need some practical advice, or have
built one already and are willing to help other groups, this is the mailing
list for you, whether you have 5 members or 500!
How to subscribe to this list:
Send a message to majordomo@ntlug.org with the following text in the
message body:
subscribe lug_support YOUR_EMAIL_ADDRESS
Larry Wall received the First Annual Free Software Foundation Award for the Advancement of Free Software at the MIT Media Lab on Friday evening. At a reception and presentation attended by CPSR Conference registrants, computer hackers and members of the press, FSF Founder Richard Stallman presented the award, a quilted GNU (GNU's Not Unix) wall hanging.
Larry Wall won the Free Software Foundation Award for the Advancement of Free Software for his programming, most notably Perl, a robust scripting language for sophisticated text manipulation and system management. His other widely-used programs include rn (news reader), patch (development and distribution tool), metaconfig (a program that writes Configure scripts), and the Warp space-war game.
On Tuesday, September 15th, 1998 in Paris, France at our user event named "Eureka", and again on October 5th at DECUS in Los Angeles, Compaq Computer Corporation announced its intent to extend its support of the Linux operating system to include the Intel as well as the Alpha platforms. In addition to extending this support to another architecture, Compaq is in the process of putting together a comprehensive program of Linux support.
This support includes, but is not limited to:
In continuing the concept of working with the Linux community, Compaq intends to extend its Linux support through its extensive channels partner programs. Compaq feels that this will give the broadest possible selection of products and solutions to our end customers, with our VARs, OEMs, Distributors and Resellers working with the customer to match the distribution, the layered products and third party offerings to that customer's needs.
IBM announced that it has extended the HTTP services in IBM WebSphere* Application Servers in the areas of performance, security and platform support by adding new functionality to the HTTP services that are packaged with WebSphere and are based on the Apache HTTP Server.
Today's announcements include technology developed by IBM Research that boosts the performance of HTTP services in the IBM WebSphere Application Server and Secure Socket Layer (SSL) support that provides customers with the security necessary to operate a web site that can handle sensitive information, such as financial transactions. In addition, IBM announced a port of the Apache HTTP Server to the AS/400* operating system. The AS/400 port will be offered to the Apache Group through the Open Source model. The Fast Response Cache Accelerator (FRCA) technology, developed by IBM Research, doubles the performance of HTTP services in WebSphere on Windows NT, according to lab tests done by SPEC (The Standard Performance Evaluation Corporation).
Both the FRCA and SSL technologies from IBM will be available at no additional charge as part of all editions of the IBM WebSphere Application Servers. The technologies will be released in the next version of WebSphere Application Server before the end of the year. The FRCA technology will also be used to boost the performance of the HTTP Server for OS/390*, and will be available as part of OS/390 Version 2 Release 7 in March of 1999.
A new OpenSource project, a general distributed-computing toolkit for quick assembly of distributed applications called CommProc, was presented at the October ApacheCon98 conference. CommProc is an advocacy effort for Linux and Apache and includes an interface module for the Apache HTTP server. Documentation and source code for the project is available at: http://www.commproc.com
IDG's Web Publishing Inc. announced the launch of LinuxWorld magazine (http://www.linuxworld.com), a Web-only magazine supplying technical information to professional Linux users who are implementing Linux and related open source technologies in their computing environments.
Inside the first issue are stories such as an interview with Linus Torvalds, the first installment of the Linux 101 column titled "A distribution by any other name," and a look at the new Windows NT domain security features found in Samba 2.0 titled "Doing the NIS/NT Samba."
May 18 - 22, 1999
Raleigh, North Carolina
Dates for Refereed paper submissions
Program Committee
Overview
The goal of the technical track of Linux Expo is to bring together engineers and researchers doing innovative work associated with Linux.
See you at LinuxExpo '99!
Date: Wed, 28 Oct 1998 20:28:52 -0600
DevCon99 is scheduled for November 14 and 15, 1998: 49 hours of training
shoehorned into two days, including "How to Use the Graphic Database
Frontend", developing a RAD application, the introduction of the
SmartERP program, marketing, advertising and free videotapes for all
attendees. SmartWare2000 offers solutions for the small to medium company
and is the only product capable of running on everything from old 286 ATs
up to Sun or SiliconGraphics systems while using the same data at the same time.
When: November 13 (Dinner at Clubhouse Inn), 14 & 15
Where: Washburn University, Topeka, Kansas
What: 2 Full Days of training on SmartWare2000,
The Graphic Database Front End, RAD, Free
SmartWare2000, Food, Room, Videos and
more.
For more information:
Greg Palmer,
greg@mobiusmarketing.com
Linux FAQ: http://www.terracom.net/~kiesling/
QtEZ, a GUI builder for the Qt library: http://qtez.zax.net/qtez/
Blackmail 0.29: http://www.jsm-net.demon.co.uk/blackmail/source/blackmail-0.29.tar.gz
SGMLtools homepage: http://www.sgmltools.org/
Distribution Answers: http://www.angband.org/~joseph/linux/
Mini-howto on setting up Samba on Red Hat Linux: http://www.sfu.ca/~yzhang/linux/
PostgreSQL Database HOWTO: http://sunsite.unc.edu/LDP/HOWTO/PostgreSQL-HOWTO.html
DragonLinux: IronWing: http://members.tripod.com/~dragonlinux/
Cygnus(R) Solutions announced the availability of GNUPro(TM) Toolkit, a new product line of fully tested, low-cost development tools releases for native and embedded software developers. Addressing the needs of the growing Linux community, the first release of Cygnus GNUPro Toolkit is targeted at software engineers developing commercial applications on the Linux operating system (OS). Today's announcement marks the first in a series of GNUPro Toolkit releases planned for a range of platforms. "Cygnus has extended its commitment to the Linux community and users of Red Hat Linux by providing a fully-certified official release of GNU development tools," said Bob Young, president and CEO of Red Hat Software. "Given the increasing popularity of both Red Hat Linux and Cygnus GNUPro, Red Hat is pleased to continue its partnership with Cygnus to provide software developers the highest quality Linux operating system and development tools."
Key Features and Benefits
Pricing and Availability
Cygnus GNUPro Toolkit for Linux is priced at $149 and is immediately available for Red Hat Linux 4.2 and 5.1 on x86 platforms by ordering online at http://www.cygnus.com/gnupro/.
Panorama is part of the GNU project. For more information about it, visit http://www.gnu.org. It is released under the GPL license, which you can read in the file 'LICENSE' in the distribution.
Panorama is a framework for 3D graphics production. This will include modeling, rendering, animating, post-processing, etc. Currently, there is no support for animation, but this will be added soon.
Functionally, it is structured as an API, composed of two dynamic libraries and several plugins that you can optionally load at runtime. A simple console-mode front-end for this API is included in the package; it can load a scene description in one of the supported scene languages and then output a single image file in any of the supported graphic formats.
Panorama can be easily extended with plugins. Plugins are simply dynamically linked C++ classes. You can add plugins without recompilation and, once this option is added to the graphic interface, even at runtime.
You can find more information about Panorama, or download latest distribution at: http://www.gnu.org/software/panorama/panorama.html
What is it:
The Netscape Wrapper is a Bourne shell script used to invoke Netscape
on a Unix platform. It copies initial default files, works around a
PostScript bug, performs a security check, and sets up the environment.
The new version also provides enhanced functionality.
What is new in this version:
The most significant change is that the script will attempt to open a new browser window before executing Netscape; i.e., if no Netscape process is present, Netscape will be executed, otherwise a new browser window is created. Likewise, when using the new option subset: if Netscape is not running, it will be executed with that option as the default window; if Netscape is running, that option will be opened using the current process.
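The usual trick behind this kind of wrapper looks roughly like the sketch below. This is not the wrapper's actual code; it just relies on Netscape's documented -remote/openURL interface, which exits non-zero when no Netscape process is running:

#!/bin/sh
# try to hand the URL to a running Netscape first
if netscape -remote "openURL($1,new-window)" >/dev/null 2>&1
then
    exit 0              # an existing Netscape opened a new window
fi
exec netscape "$1"      # none running; start a fresh browser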
ftp://ftp.psychosis.com/linux/netscape-wrapper_2.0.0
Version 2.1.6 of the GNU plotting utilities (plotutils) package is now available. This release includes a significantly enhanced version of the free C/C++ GNU libplot library for vector graphics, as well as seven command-line utilities oriented toward data plotting (graph, plot, tek2plot, plotfont, spline, ode, and double). A 130-page manual in texinfo format is included.
As of this release, GNU libplot can produce graphics files in Adobe Illustrator format. So you may now write C or C++ programs to draw vector graphics that Illustrator can edit. Also, the support for the free `idraw' and `xfig' drawing editors has been enhanced. For example, the file format used by xfig 3.2 is now supported.
RPM's for the plotutils package are available at ftp://ftp.redhat.com
For more details on the package, see its official Web page.
This is a new version of ProcMeter that has been re-written almost completely since the previous version.
It is now designed to be more user-friendly and customisable; the textual as well as graphical outputs and the extra mouse options available are part of this. It is perhaps now less of a system status monitor and more of a user information display. It can be configured to show the date and/or time instead of having a clock, and it can also monitor your e-mail inbox and act like biff.
The ProcMeter program itself is a framework on which a number of modules (plugins) are loaded. More modules can be written as required to perform more monitoring and informational functions. Available at ftp://ftp.demon.co.uk/pub/unix/linux/X11/xutils/procmeter3-3.0.tgz
Take a look at the ProcMeter web page
Version 1.3.6 of a YP (NIS version 2) server for Linux has been released. It also runs under SunOS 4.1.x, Solaris 2.4 - 2.6, AIX, HP-UX, IRIX, Ultrix and OSF1 (alpha).
These programs are needed to turn your workstation into a NIS server. The package contains ypserv, ypxfr, rpc.ypxfrd, rpc.yppasswdd, yppush, ypinit, revnetgroup, makedbm and /var/yp/Makefile.
ypserv 1.3.6 is available under the GNU General Public License.
You can get the latest version from: http://www-vt.uni-paderborn.de/~kukuk/linux/nis.html
MAM/VRS is a library for animated, interactive 3D graphics, written in C++. It works on Unix (tested on Linux, Solaris and Irix) and Windows 95/98/NT. MAM/VRS can produce output for many rendering systems: OpenGL (or Mesa), POVRay, RenderMan and VRML are supported so far. It provides bindings to many GUIs: Xt (Motif/Lesstif/Athena), Qt, Tcl/Tk, MFC and soon GTk. It is covered by the terms of the GNU LGPL. Visit our homepage for more information and to download it: http://wwwmath.uni-muenster.de/~mam
Description:
Kim is an interactive (ncurses-based), user-friendly process manager
for Linux. It reads the /proc(5) directory. '/proc' is
a pseudo-filesystem which is used as an interface to kernel
data structures.
Features:
Download:
* source, rpm, deb
URL: http://home.zf.jcu.cz/~zakkr/kim/
Version & Dependency:
Kim is independent of other programs, but all versions depend
on libproc >= 1.2.6 and ncurses.
License:
Copyright (c) 1998 Zak Karel "Zakkr"
PIKT is a set of programs, scripts, data and configuration files for administering networked workstations. PIKT's primary purpose is to monitor systems, report problems (usually by way of e-mail "alerts"), and fix those problems when possible. PIKT is not an end-user tool; it is (for now) to be used by systems administrators only.
PIKT includes an embedded scripting language, approaching in sophistication several of the other scripting languages, and introducing some new features perhaps never seen before.
PIKT also employs a sophisticated, centrally managed, per-machine/OS version control mechanism. You can, setting aside the PIKT language, even use it to version control your Perl, AWK, and shell scripts. Or, use it as a replacement for cron.
PIKT is freeware, distributed under the GNU copyleft.
Check out the Web Page for more info!
YARD-SQL version 4.04.03 has been released. So far it is available for Linux and SCO UNIX, and it contains the following new features:
Check out http://www.yard.de for more information about YARD
ISS today announced RealSecure 3.0, a new, integrated system that combines intrusion detection with state-of-the-art response and management capabilities to form the industry's first threat management solution. Formerly known as "Project Lookout", RealSecure 3.0 integrates network- and system-based intrusion detection and response capabilities into a single enterprise threat management framework, providing around-the-clock, end-to-end information protection.
Visit the ISS web site.
Applix, Inc. announced the release of Applixware 4.4.1 for the Linux platform as well as all major UNIX operating systems, Windows NT and Windows 95. This latest release delivers a new filtering framework that has been optimized for document interchange with Microsoft's Office 97 product, as well as Y2K compliance.
Applixware includes Applix Words, Spreadsheets, Graphics, Presents, and HTML Author. This Linux version also includes Applix Data and Applix Builder as standard modules. Applixware for Linux is available directly from Applix, as well as from its partners, including Red Hat and S.U.S.E.
Linux version beta test users also attest to the results. "Export of Applix Words documents to Word 97 works great, even with Swedish letters," said Klaus Neumann, a university cognitive scientist. He continued, "I think Applixware is the most promising office solution for Linux. I've tried StarOffice, WordPerfect, Lyx. Nothing comes even close to Applixware--there are none of the memory, uptime, printing, or spell-checking problems I experience with the other suites."
Applixware 4.4.1 for Linux includes for the first time Applix Data, a new module offering point and click access to information stored in relational databases. No SQL knowledge is required to access the information. Once accessed, the data can be linked directly into Applix Words, Spreadsheets, Graphics, Presents, and HTML Author.
Visit the company's web site for more information.
Servertec announced the release of iServer, a small, fast, scalable and easy to administer platform independent web server written entirely in JavaTM.
iServer is a web server for serving static web pages and a powerful application server for generating dynamic, data-driven web pages using Java Servlets, iScript, Common Gateway Interface (CGI) and Server Side Includes (SSI).
iServer provides a robust, scalable platform for individuals, work groups and corporations to establish a web presence.
Visit the Servertec Web site for more information.
TkApache v1.0 was released unto the unsuspecting world Thursday, October 16th. In its first few hours, more than 1,000 downloads were logged!
Anyway, it's a full GUI front-end to managing and configuring an Apache web server, and it's written in Perl/Tk - released under the GPL and developed COMPLETELY under Linux: web site, graphics, code, etc.
The TkApache home page could tell you a lot more...
WHY: Ran out of cash.
REALLY WHY: Lot of reasons, but then again, there are a lot of reasons that we got as far as we did. I think the killer reason, though, was that Golgotha was compared by publishers primarily to Battlezone and Uprising, and those titles sold really poorly.
WHAT NOW?: Now we file articles of dissolution w/ the secretary of state, and we file bankruptcy.
IS THAT IT?!: No.
WHAT ELSE?: We're releasing the Golgotha source code, and data to the public domain.
WHERE'S THE SOURCE CODE?: I want to personally thank everyone who supported & rooted for us. That was really nice of you.
BLAH BLAH, WHERE'S THE SOURCE?: I want to apologize to the fans and business partners we've let down.
BOO HOO! WE CARE. OUT WITH IT!: Thanks for your patience. The source & data can be found at http://www.crack.com/golgotha_release. And of course, the ex-Crack developers are up to new & interesting things which you can follow at the web site.
Sincerely, Dave Taylor
From ajrlly on 26 Sep 1998
Is there a way to configure Apache so that when someone requests a particular page (i.e. http://www.whatever.com/~user/index.html), a CGI script is automatically invoked, transparent to the requestor? The goal is to have a different page served depending on the IP address.
thnx
I think you could use the "x-bit hack" feature --- mark the index.html page as a Unix/Linux "executable" and use SSI (server-side include) directives to accomplish this.
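A minimal sketch of that approach (Apache 1.x; the CGI path and script are made-up examples):

# in httpd.conf or .htaccess, enable the x-bit hack:
XBitHack on

# mark the page as "executable" so it gets parsed for SSI:
chmod +x ~user/public_html/index.html

and inside index.html, a directive along the lines of:

<!--#exec cgi="/cgi-bin/pick-page.cgi" -->

where pick-page.cgi is a hypothetical script that inspects REMOTE_ADDR and emits whichever content you want.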
There are also various modules for Apache to support XSSI (extended server-side includes), ePerl and EmbPerl (embedded Perl interpreters which execute code embedded in your documents), and other forms of dynamic output.
For real details you should probably read the FAQ --- try http://www.apache.org for access to that.
In addition that FAQ recommends the comp.infosystems.www.servers.unix newsgroup for general support. There are also a couple of companies that offer commercial support for the system.
You can read about new developments in Apache by regularly visiting the Apache Week web site (http://www.apacheweek.com) Which should probably be right next to "Linux Weekly News" http://www.lwn.net, on your lists of weekly sites to visit.
Unfortunately they don't seem to have an "answer guy" over at Apache Week --- so we can't forward your question off to him.
Personally I don't like the idea of publishing different "apparently static" web pages based on the IP address of the requestor. First it seems deceitful. Also IP addresses and DNS domains (reverse or otherwise) are very poor ways of identifying users or readership. In addition these sorts of "dynamic" pages put extra load on the server and add additional latency to the request. This is a particularly bad idea for index.html pages --- which are the most often accessed.
I think it is best to identify what you really want the world to see (a process of writing, composition and publication) and put that into your main static web pages. If you need timely or periodic updates (web counters, whatever), use a cron job to periodically re-"make" the static pages from their "sources" (templates) using the text processing tool of your choice (m4 and the C preprocessor, cpp, seem to be particularly popular for this, although many specialized tools exist for the task).
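For example, a made-up crontab entry along those lines (the path and Makefile are assumptions):

# rebuild the static pages from their m4 templates once an hour
0 * * * *    cd /home/user/www-src && make -s install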
Part of this also depends on what you're really trying to do. For example, if you want "internal" users to see one set of pages and "external" users to see another, your best bet is to configure your server with two interfaces (or at least IP aliases) and use the Apache "Bind" directive to bind one copy of the Apache server to one IP address/interface, and a different one (with different configuration, document root, etc.) to the other.
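A sketch of that two-instance setup (the addresses are made up; BindAddress is the Apache 1.x spelling of the directive, one address per server instance):

# httpd-internal.conf -- run as:  httpd -f /etc/httpd/httpd-internal.conf
BindAddress  192.168.1.1
DocumentRoot /home/httpd/internal

# httpd-external.conf -- a second daemon with its own config
BindAddress  207.123.45.6
DocumentRoot /home/httpd/external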
Naturally each of your virtual hosts ("soft" using HTTP 1.1 features, or "hard" requiring separate IP addresses) can have completely different document roots and many other configuration differences. All of that falls under the category of "virtual hosting" and is covered pretty extensively in the Apache documentation (which is all available at the aforementioned web sites).
If you're trying to provide information in a different language or format based on the source of the request you should read about "Content Negotiation" at:
If you're attempting to do this based on "security" or "cookies", there are extensive methods for doing this supported by Apache --- and most of them are most easily accomplished by performing "redirection" as the connection is established.
For real security --- to provide web pages to your "extranet" partners (strategic customers, vendors, etc.) and your mobile employees --- I wouldn't suggest anything less than "client side certificates" over SSL (secure sockets layer --- a set of encryption protocols proposed by Netscape and implemented by many browsers and in several other packages; the dominant "free" SSL code base is SSLeay, by Eric A. Young of Australia).
These sorts of certificates are issued to users on an individual basis (they can be from a recognized third party CA --- certifying authority --- or you can create your own "in-house" CA and accept those, exclusively or otherwise).
There are a large number of modules available for Apache: some do things like block access based on the "Referrer" value (to prevent other web sites from using your pictures and bandwidth by "reference", for example), or fix UpperVSLOWER/CasING/ problems in the requested URLs; and there are a couple of different ones to perform rewriting of request URLs --- like the mod_rewrite module, which supports full regex rewrites and some weird conditional and variable assignment features.
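For instance, a hypothetical mod_rewrite fragment that serves a different page to requests from one network (the addresses and filenames are made up):

RewriteEngine on
RewriteCond   %{REMOTE_ADDR}  ^192\.168\.
RewriteRule   ^/~user/index\.html$   /~user/index-internal.html  [L]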
It appears that the "official" place to learn about Apache modules seems to be the "Module Registry" at http://www.zyzzyva.com
[ It moved to http://modules.apache.org/ which is much easier to remember too. Update your bookmarks, everybody -- Heather ]
From Frits Hoogland on 08 Oct 1998
Hi almighty answerguy!
I'm a bit confused by all the updates of various system components (like the libc, gcc, etc., etc.). Is it advisable to look at ftp.redhat.com for updates for my 5.0 system? Is it advisable to download a new kernel? Can I install, let's say, kernel 2.0.35 (which, as I noticed, nearly everyone uses) or are there things I have to consider, things I have to check, etc.?
That's an excellent question. Using Red Hat's package management system (RPM) does make it faster and easier for most "mere mortals" (myself included) to upgrade most packages and install most new ones.
Debian package management is allegedly at least as good --- but it doesn't seem to be documented nearly as well so it's harder to learn than RPM. (Hey, Debian dudes if you write a DPKG/APT Guide for RPM users --- you might win more converts!).
Even Slackware's pkgadd (or is that pkg_add, it's been so long) is somewhat easier than the old "manly" way of upgrading your software (downloading the sources, and building them yourself).
Indeed, even that approach (building from sources) has improved quite a bit over the years, for most packages. The mark of a good modern package is that you can install it, from source with the following sequence of commands:
tar tzf /usr/local/from/foo.tar.gz
                # look at contents; insure that it creates
                # its own directory tree and puts everything
                # in there:
cd /usr/local/src
                # extract the sources into our local source
                # tree. (Might need to do a mkdir and cd into
                # that if your package extracts to the "current"
                # directory).
tar xzf ....
cd $package_source_top_level_dir
view README     # or 'more' or 'less' -- maybe different READMEs.
                # This should have some basic instructions
                # which ideally will amount to:
./configure     # possibly with options like --prefix=/usr/local
make install
... Note that the really important parts of this are './configure' and 'make install' After that a good source package should be ready to configure and run.
(Note that the "configure" command in my examples is a script generated to perform a set of local tests and select various definitions that are to be used by the make file. This, in turn, tells the local system how to "make" the program's binaries, libraries, man pages and other files in a way that's suitable for your system --- and (with the commonly implemented "install" option or "target" in "makefile" terms) tells the 'make' command where to put things. There is a difference between "./configure"-ing the sources to be build and "configuring" the resulting package).
In any event, with RPM's you get the package (for your plattform: x86, Alpha, SPARC, PowerPC, etc) and type:
rpm -i foo-$VERSION.$PLATFORM.rpm
... or whatever the file's name is. To upgrade a source package you follow mostly the same procedure as for a fresh source install (usually saving any configuration files and/or data from the previous versions, and maybe moving or renaming all of the old libs and bins and other files). It would be nice if source package maintainers made upgrades easier by detecting their prior installed version and suggesting a "make upgrade" command --- or something like that.
To upgrade a typical RPM you just type:
rpm -U foo.....rpm
There are similar commands for Debian, but I don't know them off the top of my head and they aren't handy to look up from this S.u.S.E. (RPM) system.
(I'm sure I'll get flamed for the perceived slight --- oh well. Comes with the territory. Please include techie info and examples with the flames).
Now, when it comes to major system upgrades (like libc5 to glibc, and from a 1.x kernel to a 2.x kernel) it's a different matter.
If you have a libc5 system and you just install glibc onto it, there's no real problem. The two will co-exist. All of your existing libc5 programs will continue to load their shared libraries, and all your glibc2 (a.k.a. libc6) linked programs should find the new library. Running a mixture of "typical" programs from both libraries will have no important effects (although you'll be using more memory than you would if all your binaries were linked against the same libraries).
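If you're curious which library a given binary is linked against, 'ldd' will tell you; the output below is typical rather than exact:

$ ldd /bin/login
        libc.so.5 => /lib/libc.so.5 (0x4000a000)
$ ldd ./some-new-program
        libc.so.6 => /lib/libc.so.6 (0x40010000)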
Notice I said "typical" --- when it comes to system utilities, like the 'login' command there are some interactions that are significant and even incompatible. I've heard that the format of the "utmp" and "wtmp" records are different (these are user and "who" log files) which are accessed by a whole suite of different utilities (like the 'who' and 'w' commands, the 'login' and 'xdm' commands, 'screen' and other utilities).
So, it's best to upgrade the whole system to glibc at once. (The occasional application, one that is not part of the base "system" and isn't "low level" that uses a different version/set of libraries won't be a problem).
With most recent kernels you can install the sources under /usr/src/linux and run the following command:
make menuconfig
(go through a long list of options to customize the kernel to your system and preferences)... or copy in your old .config file and type:
make oldconfig
... to focus on just the differences between the options listed/chosen for your previous kernel and the new one.
Then you'd type something like:
make clean dep install modules modules_install
... and wait awhile.
I've done kernel upgrades that were that easy. Usually I read the "changelog" and other text files, and the help screens on most of the new options (I usually also have to refresh my memory on a couple dozen old options).
These are major upgrades because they can affect the operation of your whole system.
Recently my father (studying Mathematica) needed a better video card. This was an old VLB (VESA Local Bus) 486 running Red Hat 4.1. So I decided to build a new system (Pentium 166, PCI video card, new 6Gb UDMA hard disk) and upgrade his system to Red Hat 5.1.
So, here's how I did that:
Build new hardware, boot from a customized copy of Tom Oehser's "root/boot" diskette (http://www.toms.net/rb) and connect to the LAN using a temporary IP address that we have reserved for this purpose.
I then run fdisk on the new drive and issue a command like:
for i in 1 3 5 6 7; do mke2fs -c /dev/hda$i; done
(to make filesystems on all of the partitions: root, rescue root, /usr, /home and /usr/local). I go away and answer e-mail and get some coffee, getting thoroughly sidetracked.
A day or so later I remember to finish work on that (he reminds me that he has some homework to do).
Now I mount all of these filesystems the way I want them to be later (when I reboot from the hard disk). So I mount the new root fs under /mnt/den (the machine's name is Deneb --- an obscure star), the new usr under /mnt/den/usr, the new /usr/local under /mnt/den/usr/local (etc).
Then I copy his old /etc/passwd and /etc/group file into the ram disk (see below) and issue a command like the following:
rsh deneb "cd / && find . | cpio -o0BH crc " \
| ( cd /mnt/den && cpio -ivumd )
... this copies his entire existing system to the new system.
When that's done (it doesn't take long, but I didn't time it --- it runs unattended until I get back to it), I edit the /mnt/den/etc/fstab, run a chroot command (cd /mnt/den && chroot . /bin/sh), fix up the lilo.conf file and run /sbin/lilo, and reboot (with the root/boot diskette removed).
Now I've replicated his whole system (and accidentally knocked his box off of our LAN, because I forgot to reset the IP address of this one). So, I fix that.
I make a last check to make sure that everything *really* did copy over like I wanted:
cd / ; rsh den " cd / && tar cf - . " | tar df -
... this copies all of his files back over the net again (this time using 'tar' instead of cpio), but the receiving copy just compares (diffs) the incoming file "archive" (from its standard input, a.k.a. the pipelined data) rather than extracting the files and writing them into place.
This reports a few differences among some log files and the /etc/ files that I modified, and it gives some errors about "sockets" (Unix domain sockets show up in your file tree as little filenames with an "s" as the leading character in 'ls -l' output; there are about five or six of these on a typical system: one for your printer, one for your syslog, and one or two for any copies of X Windows or 'screen' you may have run. These should not be confused with "internet domain" sockets, which only exist in memory and go through your IP interfaces).
I presume that the tar diff feature simply doesn't support Unix domain sockets; it's probably a bug, but not a significant one to me.
A different bug in 'cpio' is a bit irritating and I have to report it to their maintainer. Remember how I copied over my old passwd and group files before the transfer? There's *supposed* to be an option to "use the numeric UID and GID" (-n or --numeric-uid-gid) in 'cpio' (and a similar one in newer versions of 'tar'). However, my copies (on several machines from several distributions around the house) all just give an error if I try to use that switch. Not a reasonable error message like: "option not supported" or "don't do that you moron" --- just a stubborn insistence on printing the "usage" summary which clearly shows these options as available!
The quickest workaround is to just copy the passwd and group files to the local system before attempting the "restore" (replication). One time when I failed to do this (using a version of 'tar' that didn't support the feature) it squashed the ownership of every file to 'root' --- resulting in a useless system. Luckily I was just playing around that time, so I learned to watch out for that.
So, now I just slip in my Red Hat 5.1 CD (courtesy of the IBM booth at last week's ISPCon in San Jose --- where they were giving them out). I think IBM's booth got them from the BALUG (Bay Area Linux Users Group) which is still trying to scrape up a few hundred bucks to pay for the 'free' booth they were offered. (Free means no fees to the convention co-ordinators; we went out of pocket for "renting" tables and chairs and paying for a power extension for the demo computer).
From there I just let the RH 5.1 upgrade process run its course.
What!?! All that work just to run the upgrade?
Yep!
I spent years in technical support (MS-DOS and Windows markets). I consider vendor and OS upgrades to be the most dangerous thing you can do on your computer. I'm sure they've caused more downtime and lost data than failed hard drives, virus infections, stupid users (other than the stupidity of blind updates), and disgruntled employees.
So, I like to make sure that I can get back to where I started. For me, that means "start with a copy of the system or a restore of the system's backups" I've been known to take a hard drive out of a system, install a new one, restore the backup to that and then to the restore to the "new" system. (The old hard drive stays on a shelf until the data on it is so out of date it would be worth copying back in --- then gets rolled into some other system, or the same one, as "extra disk space").
Now, please don't take this as a personal attack on Linux or Red Hat. So far they haven't failed me. I have yet to see an "upgrade" destroy one of my systems. However, my professional experience has proven to me that this is the right approach, even for one of my home systems.
In this case the upgrade was silky smooth. I had to fuss a little to get in a new X server (different video card, remember) and the new system really didn't like the PS/2 mouse that I gave it (which was of no consequence, since my father uses a serial port trackball, so the mouse that I had on /dev/psaux was just a temporary one anyway).
Oh, yeah, I had to compile a new kernel to support the ethernet card I'm using (a Tulip based Netgear). There was probably a module laying around the CD for it somewhere --- but so what. It's a good test for the system.
At this point the old computer is sitting in the living room, and the new one is in his room running Mathematica. In a week or so (when we're really convinced that everything is really O.K. with the new box) I'll prep that old 486 up as a server (my colocated server is due for an upgrade --- so this one will go in for it, and that one will become the spare and test bed).
I can understand how most users don't want to have whole systems around as spares. However, these days, it's not too expensive to keep an extra 6Gb hard drive laying around for these sorts of "major" upgrades. It's also a good way to insure that your backups are working (if you use the "restore" method rather than a network "replication" trick).
Note that this whole process, as complicated as it sounds, only takes a little more "human" time than just slipping in the CD and doing it blindly. The system keeps pretty busy --- but I don't wait for it; I only issued 10 commands or so (I have a couple of scripts for "tardiff" and "replicate" to save some of the typing).
For the daring, you can run a program called 'rpmwatch' (for Red Hat or other RPM based systems) or "autoup.sh" (Debian). You point these at your favorite mirror and they will automatically update new packages.
Of course this is "daring" --- I personally wouldn't consider it from any public mirror site. I have recommended it to corporate customers, where they point their client systems at an internal server and their staff posts rpm's after testing (limited automated deployment). This is a little easier for some sorts of upgrades than using 'rdist' and home brewed scripts.
In terms of upgrades --- my main "gateway" (router, server, mailhost, uucp hub, and internal web server) is an old 386/33 --- it's about a decade old, and has 32Mb of RAM and a single, full SCSI chain with a few Gig of disk space. It runs an old copy of RH 4.2, which is an upgrade (disk replication/swap method) from 3.03, which is an upgrade from Slackware 1.(something), which was an upgrade (wipe and re-install from scratch) from SLS something (running a 0.99p10 kernel).
I used to use that machine (antares) for everything (even X --- it has a 2mb STB Powergraph video card that cost more than a new motherboard would have). However, these days I mostly use 'canopus' -- a S.u.S.E. 5.1 upgraded to 5.3 (blindly --- I was feeling lazy!) My wife mostly uses her workstation, 'betelgeuse' --- which came from VA Research with some version of Red Hat (read the review I wrote for Linux Journal if you're really curious) --- and was upgraded (new installation on new drive, copy the data) to S.u.S.E. 5.2.
So, you can see that we've used a variety of upgrade strategies around the house over the years.
As for installing a new kernel: do a good backup. Now ask: can I afford a bit of down time if I break the system (while I restore it)? If the answer is "yes" then go get a 2.1.124 (or later) kernel and try that. We're getting really close to 2.2 and only a few thousand people have tried the new kernels. So we want lots of people to at least try the current releases before we finally go to 2.2.
(Linus doesn't want to have 36 "fix" releases to the next kernel series).
The new kernel should be much faster in TCP/IP performance (already one of the fastest on the platform) and much, much faster in filesystem performance (using the new dcache system).
So, try the new kernel. Be sure to get a new copy of pppd if you use that --- the kernel does change some structure or interface that the old version trips on.
This upgrade will not be nearly as big a deal as the 1.2 to 2.0 shift (which was the most disruptive in the history of Linux as far as I can tell --- the formats of entries under /proc changed, so it broke the whole procps suite of utilities, like the 'ps' and 'top' commands). I haven't seen any such problems from the 2.0 to 2.1 kernels (I'm running a 2.1.123 at the moment, on canopus; antares is running 2.0.33 or so --- it is least frequently upgraded because it is the server).
Looking forward to your answer. Frits.
From Dave Barker on 08 Sep 1998
I'm trying to setup a Linux RH 5.0 box as a dial in server using a DigiBoard C/X host adaptor and a 16 port C/Con 16 Concentrator. What I'd like to know is:
Does Linux support software-controlled serial ports? Meaning, the current attempt has been to set up a 15MB DOS partition as the boot, install the DOS drivers from Digi, and then add the COM ports into Linux.
The question (as stated) is flawed. On a PC, all multi-port serial boards require some form of software to control them (since the BIOS/firmware only supports 4 COM ports). In addition, the BIOS support for COM ports is extremely limited and slow --- so all communications software (under DOS, Windows, OS/2, Linux and other forms of UNIX, etc.) has bypassed the firmware and used direct access to the I/O ports that control the serial ports, as well as those that actually carry the serial data.
So, a more reasonable restatement of your question might be:
Can Linux use the DOS drivers supplied by Digi?
... that answer is "no." (It is also "no" for NT, other forms of UNIX, OS/2 and probably for Win '95/'98 as well).
Device drivers are generally not portable between operating systems. The interface between an OS kernel and its device drivers is typically unique to each OS. There has been some discussion about creating a portable PC device driver specification --- to allow SCO, Solaris, Linux, and *BSD to share (at least some) device drivers. That will probably never happen --- and even if it does, it will probably never extend to non-Unix-like operating systems.
Now, regarding the broader question:
Does Linux support the Digi C/X intelligent serial port subsystem?
When I last corresponded with Digi about this they assured me that they would have native Linux drivers by the end of that summer. That was over a year ago. I did check back on their web site a couple of months ago and it didn't seem to indicate that they'd ever delivered on this promise.
The obvious thing to do would be to contact their support staff and ask for native Linux drivers. It may be that their web site is out of date, or that my efforts to weed through their pages were inadequate (the bigger the company, the worse their web site).
[ I dunno about the Digi International site, (which is being redesigned right now) but the Linux Digiboard Page might be useful, even though it's rather old. -- Heather ]
Next if this would work in theory what is the proper way to go about setting the serial ports up?
The "proper way" would be to use the device drivers that work with your OS. Another way, might be to run the DOS drivers under 'dosemu' (configuring that with sufficient access to the hardware and I/O permissions map that the Linux kernel will let it drive the serial board). However, that would only allow DOS to access the devices.
In the project where I initially contacted them I was using an operating system called TSX-32 (by S&H systems: http://www.sandh.com) --- and the TSX-BBS (also by them).
This package is a 32-bit multi-user commercial (closed source) OS that's modeled after the old TSX-11 and RSX-11 (a predecessor to VMS on the PDP-11 platform). It also runs a decent DOS emulator, and the BBS has some nice features that make it more scalable than any other that I'd seen.
(I've run Galacticomm MajorBBS and eSoft TBBS systems, which used to be limited to single CPUs, had no TCP/IP support, no end-user scripting facilities, limited support for doors, and little or no support for intelligent serial hardware --- such that 255 lines was about the maximum. PCBoard was limited to about 4 to 8 lines per PC --- and you needed a Netware server to cluster those. TSX-BBS can handle 250 lines per server, and multiple servers can peer over TCP/IP for a potential of thousands of lines.)
Obviously Linux (and other forms of Unix) have that sort of scalability --- given the drivers. There are some big Unix/Linux BBS's out there (like MMB "Teammate" and a native port of Galacticomm's BBS --- renamed to something like "WorldPort" --- though I don't remember the details).
My enthusiasm for TSX-BBS has waned a bit (they aren't getting out the updates that I'd hoped for). However, that's a non-issue since I left that position long ago and no longer have to maintain any BBS' (and the whole dial-up bulletin board system phenomenon seems to have waned almost as much as UUCP).
I'm really in a bind here, and could use any help I can get! Thanks in advance David Barker jaeckyl@execpc.com
I would beat up on Digi --- and, if they don't satisfy your needs --- consider one of their other models (a number of them do support Linux) or Rocketport, Equinox, Comtrol, Cyclades, Stallion, or other vendors of multi-port serial hardware that will support Linux.
Naturally I understand that this may entail major reworking of your plan. The C/X is the only system that I know of that allows you to connect 250 ports to a PC using only one bus slot. You might have to rework your plan to use multiple Linux systems, each with multiple serial port boards, and configure those as "terminal servers" (have them bind the incoming serial/phone connections into 8-bit clean rlogin/telnet sessions to the master server).
Of course you could also look at traditional terminal servers. These are router-like devices that do precisely what I've described, often with support for their own authentication (RADIUS and/or various versions of TACACS), and with support to provide PPP and/or SLIP instead of the rlogin/telnet-like session that would be used for dial-up terminal use.
Naturally, to give you better advice and more specifics I'd have to know more about your requirements and less about the particular problems you've encountered with your currently proposed solution. All I can currently guess about your requirements is that you need support for a bunch of serial lines (you said "dial-in" so I can also guess that a bunch of modems are involved).
If you already purchased the C/X and you already selected Linux for this project then shame on you --- you really need to do requirements analysis before committing to various implementation details (specific hardware and software). (If you're just stuck with some stuff you had laying about for a skunkworks project --- then my condolences; you might need to negotiate some funding for it).
From Jack on 08 Oct 1998
Hi, I am a huge fan of Linux and all that GNU stands for. I got my first taste of Linux back in the 0.99 days. Since that time I have poked and prodded along with different flavors of installation, but due to my work environment I have never been able to jump in with both feet. I have finally scraped together a modest system for my home use that I can dedicate to Linux, and wanted to get it set up as a network server. I have been reading articles, HOWTOs, and the like about setting up network access. Each of the references has always begun past the step where I am getting hung up. I cannot get the system to recognize my ethernet card.
True, it is an NE2000 clone (cheap), but Win95 recognizes it just fine and the software packaged with it has no trouble locating the card, nor does the plug-n-pray BIOS. I read the Ethernet-HOWTO, which tells about issuing commands during the lilo startup, and tried that with the values returned from the packaged diagnostic software. I'm hoping this is just something I'm overlooking or mis-read or didn't read (yet), and not a situation where I need to upgrade the ethernet card (my last option). I came to you as my next to last option, since you have given so many people good advice in the past. I hope you can help and look forward to hearing from you soon.
Jack.
Replace the ethernet card. Go get a Netgear (Tulip based) or a 3Com (3c509).
I had exactly the same problem when I first tried to configure a Linux system to use some NE2000 clone.
It probably has nothing to do with Plug-n-pray or IRQ's or I/O ports, or kernel drivers, or options, or your ifconfig command. I tried everything with that card --- and it just plain wasn't recognized!
Trust me. You can spend $30 (US) or less on a supported ethernet card and it will almost certainly just work. Or you can spend the same hours I did thinking "I must be stupid" and then go buy the card.
The term NE2000 should be taken with a large block of salt (the kind we used to haul out to the horse pastures when I was a kid). They are "NE2000 compatible" in about the same way as a "winmodem" is "Hayes compatible" --- only when you use them through the "supported" software.
True hardware-level compatibility involves using the exact same sets of I/O ports and functions as the original. Most of the cheap NE2000 clones just aren't compatible; they have to supply their own versions of the drivers --- and they only achieve "compatibility" through the software interfaces.
From peter.wiersig@db.com on 23 Oct 1998
You're certainly entitled to your opinion, and you could get much response and satisfaction (if you are a troll) by ...
I think I was trolling by sending my mail to the Answer Guy.
I didn't think you were trolling me.
None of this is relevant to Linux. Most Linux users are sophisticated enough to simply ignore these threads (clearly I fail that test, myself).
I apologize. I found the Linux Gazette recently and have read many of the issues over the last few days. I was impressed by your column, and I am now enlightened that I contacted you in the wrong context. I hope I will think more about what to send to whom in the future.
No apology is necessary. You stated an opinion, I disagreed with it. The discussion is mildly offtopic. I pointed that out.
Some have referred to "Good Times" hoaxes as "mental viruses" --- more power to them. By that token *you* have been infected, since you have revived (brought back to life) the discussion.
Yeah, I should be more careful about where I put my focus.
I hope I didn't steal too much of your time.
No problem. I'm the one that volunteers for this gig.
Yours, Peter Wiersig
(If any of the above doesn't make much sense, maybe it's because English is not my first language.)
Sometimes I'm not sure my English is up to snuff either.
From Axel Harvey on 08 Sep 1998
Further to my verbose text of last night. I realized after spouting away that accent and symbol keys will not elicit any response from the shell command line (except a rude beep) but they do work perfectly well in text-receiving functions like pico and pine. I, poor sap, had been modifying and reloading my test keymap, then seeing if the keys worked from the command line...!??!
I should have mentioned that the RedHat distribution has a French keyboard which works, but I don't find the layout very useful. I shall send them my own version of a French keyboard as soon as I have refined it sufficiently.
Yep.
I still don't understand my problem with X installation.
Have you tried the suggestions I already gave?
Are you plagued by questions from stupid guys who could solve their own mysteries with one more hour of futzing?
That's not stupidity. Hours of "futzing" is experience. I personally feel that the hours of "futzing" that I've done should benefit others besides myself.
However, I am plagued by questions that are more readily answered in HOWTO's and even some questions that I can only categorize as stupid (like phreak/cracker wannabe's and jobhunting AS/400 specialists).
From Tom Watson on 09 Sep 1998
Answer guy:
In several questions, I see people asking about booting large disks with older machines. A while ago I built up a machine like this, and had a reasonable solution.
While I can understand that the problem is often interference with the "other" operating systems, in the case I used, I had an older '486 motherboard which given the requirements for windoze was a bit out of date, but quite reasonable for Linux. In attempting a "normal" installation on the larger disk, the first (root) partition was quite a bit over the 540 MB BIOS limitation. It took me a few attempts at running LILO and understanding its error messages (I even tried the "linear" option) to understand the problem (I hadn't used a disk that large before). When I remembered the "540MB" problem, the solution that I explained above seemed the "easiest" to implement, and with the least amount of "hassle". It only took a few symbolic links/copies and I was done. The "basic" root partition was still intact, and nobody really worried about the difference. I feel that if I wanted to, I could have made a "small" partition and installed the "root" files there, but most installations want a larger partition to get "the whole works" installed.
Sure, this gets around the "1024 cylinder" problem, but usually that is all that is needed. Linux, once it has started the kernel, supplies the drivers for further operation. The small partition is only used to accommodate the BIOS, whose ONLY function is to load the kernel anyway.
I suppose an alternative is to use "loadlin" under DOS, but you still need to boot DOS, and the 1024 cylinder problem comes up again.
I'm trying to get a solution that involves "minimum impact".
--
Tom Watson I'm at work now (Generic short signature)
All true enough. My point is that my answers are not simply intended to show what works for me or what is minimum impact for one situation (usually a situation about which I'm given very little information). My answers try to cover a large number of likely situations and to suggest things that no one seems to have tried before (like the auto-rescue configuration I was babbling about).
I've frequently suggested small partitions, LOADLIN, alternate root partitions, even floppy drive kernels.
Another problem --- the source of this whole thread has to do with support for these Ultra-DMA drives that are over 8Gb. I guess that the most recent kernels (2.0.35 and later?) have support for a different IDE driver that can handle these. I thought I'd seen reports of problems with them. I commented on this in the hopes that someone would give me the scoop on those.
From Christopher & Eunjoo Kern on 21 Oct 1998
Mr. James Dennis:
I had been given your name as a reference from a coworker of mine. He has told me that you often answer the most difficult of questions regarding Windows 95 and Linux software. Are you indeed the fellow my friend speaks of, and could you possibly answer a question or two of mine?
Kern.
I actually answer Linux questions. I get a lot of Win '95 questions, which I grudgingly answer to some small part of my extremely limited ability.
Although I bump into Win '95 occasionally at my customer's sites and with some friends, I probably have clocked in less than 10 hours of "driving time" on it and NT and Win '98 all told.
I answer Linux questions for free for the Linux Gazette. You can see many of those by pointing your web browser at: http://www.linuxgazette.com (they also have a nifty search feature).
Linux Gazette is part of the Linux Documentation Project (LDP: http://www.sunsite.unc.edu). That's the best resource for basic Linux information.
So, what are your questions?
From Victor J. McCoy on 11 Oct 1998
Please publish this. After the original question, I received a number of inquiries indicating that I'm not the only one with this problem.
Last year LG22 (Oct97) I asked a question regarding window manager and pppd anomaly.
Quick answer: Num-Lock key activation.
Long answer:
I finally got fed up with the problem, so I tore my machine apart, down to a single SCSI controller (with only my root drive), keyboard, modem.
The problem continued. I upgraded to redhat 5.0 since, and the problem persisted, different Window managers also exhibit problems (just differently). I even changed to Caldera lite 1.2, and I still had the problem.
Believe it or not, it turned out to be my NUM-LOCK key. If NL is active, then the WM screws up EVERY TIME; different WMs just screw up differently. I would turn on Num-lock to ping a host IP addr and that would be all it took.
I have a MS natural keyboard (Of all the products MS has, software sucks, but I love the hardware [KB and Mouse].) I'm sure that's not the problem, because I've only recently acquired the KB and the problem has been around for a couple of years.
I would like to know if this is a widespread problem. Surely, I'm not one of very few people to use the numeric keypad.
Victor J. McCoy
I'll just let this message speak for itself. I'm not really sure what it's saying, but it sounds like a call for other people to do some testing. What happens if you have your NumLock key active when you start up and use your ppp link from within an X session?
As for ways to try to alleviate the problem:
... I don't know of good ways to troubleshoot or isolate this, off hand. Look in your syslog (/var/log/messages) for "oops" or similar messages from the kernel. Try strace on the X server and the window managers, and on the pppd (also run it under full debugging and kdebug). Try adding ipfwadm (or ipchains) rules with the -o switch (to "output" details of every packet to your syslogs). You could also capture tcpdumps from the ppp0 interface during the whole affair.
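For the last of those, something like this (assuming tcpdump is installed) would do:

    tcpdump -i ppp0 -w /tmp/ppp0.dump      # raw capture to a file
    tcpdump -r /tmp/ppp0.dump | less       # read it back after the hang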
It's probably something specific to your hardware --- since you've changed your software completely. I'll be curious if you can isolate the problem to a specific library function or system call.
From tng on 14 Sep 1998
I've been searching for 3 days on setting up some kind of e-mail quota to restrict the amount of e-mail that can be sent by a particular person. I've been to AltaVista and did a search that turned up 1700 matches, none of which were of any help. I went to sendmail.org and browsed their online documentation, and have gone through newsgroup archives, to find myself still wondering if there is software available to do it. I found lots of info about setting up bulk e-mailers and stopping incoming spam, but nothing for stopping my local users from bulk e-mailing and spamming others. I would be grateful for any help on this matter.
thanks in advance... tng
Well, that's a new one. I don't know of any package that does this.
I'm sure it can be done --- you could define a custom mailer (one of those funny looking Mprog lines in a sendmail.cf file). Let's call this the quota mailer --- you'd then define that as the mailer to over-ride the built-in smtp mailer. Your quota mailer could then be responsible for counting messages, addresses, bytes, etc., and updating a database of authorized users and relayers --- and then relaying the mail into a queue where a different sendmail (using a different configuration) would send it out (probably as a regular 'cron' job).
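Just to make the idea concrete, here's a hand-waving sketch of what the body of such a quota mailer might look like --- everything here (the paths, the quota scheme, and the assumption that the mailer's A= line passes the sender first and the recipients after) is hypothetical and untested:

    #!/bin/sh
    # quota-relay -- hypothetical custom-mailer body (a sketch only).
    # Assumes sendmail hands us the sender in $1 and recipients in
    # the rest; the message itself arrives on stdin.
    SENDER="$1"; shift
    QUOTADIR=/var/spool/mailquota
    LIMIT=200    # arbitrary per-sender message limit

    # bump this sender's message count
    COUNT=`cat "$QUOTADIR/$SENDER" 2>/dev/null`
    COUNT=`expr ${COUNT:-0} + 1`
    echo $COUNT > "$QUOTADIR/$SENDER"

    if [ $COUNT -gt $LIMIT ]; then
        echo "mail quota exceeded for $SENDER" 1>&2
        exit 75    # EX_TEMPFAIL: sendmail keeps the message queued
    fi

    # hand the message to a second sendmail with its own
    # configuration and queue (sendmail-out.cf is hypothetical)
    exec /usr/lib/sendmail -C/etc/sendmail-out.cf -f "$SENDER" "$@"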
The quickest way to get such a beast built might be to hire a consultant like Robert Harker (he specializes in 'sendmail' and teaches tutorials in it http://www.harker.com).
For qmail or VMailer there might be an easier way.
Another problem you'll have with this is that you'd have to prevent people from bypassing all of your mail user agents and sending their mail using some custom program that they've installed themselves. Such a program could work by simply opening a TCP connection to the smtp port (25) of their addressees' sites (or any open relayer) directly. You'd have to put packet filters on all of your egress routes (firewalls and border routers) to prevent this, thus forcing your customers/users to use your outbound relay.
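On a 2.0.x kernel that might look something like the following (a sketch --- the addresses and the relay host are made up):

    # let only the relay box (192.168.1.10 here) speak SMTP outbound
    /sbin/ipfwadm -F -a accept -P tcp -S 192.168.1.10 -D 0.0.0.0/0 25
    /sbin/ipfwadm -F -a deny   -P tcp -S 192.168.1.0/24 -D 0.0.0.0/0 25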
There are several commercial products that do filtering of outbound mail (MIMESweeper, WebShield, that sort of thing). They purport to protect companies from insiders who might be mailing company secrets out to their competitors. In general I think this is a pathetic approach to the risk (they can easily shove the data on a diskette, Zip disk or whatever, and mail it; or they can encrypt it --- using pkzip with its "scramble" encryption and mail that as a "bitmap" --- or they can use freely available tools to do some serious steganography).
However, these "mail firewalls" may be adaptable to your needs. Also, there may be some free one floating around that I haven't heard of.
The best place to ask for more info on this is in the comp.mail.sendmail newsgroup (I don't know of a general mail transfer agents newsgroup -- so c.m.sendmail seems to get all of that traffic. I expect there'll be a comp.mail.qmail and a comp.mail.vmailer eventually).
I suppose you could also ask in comp.security.firewalls --- and you could dig up the mailing lists for qmail, VMailer and the firewalls mailing list (which recently moved off of Brent's site at Great Circle Associates and is hosted by some friends of his at GNAC) --- you'll have to spend some quality Yahoo!/Deja News/Alta Vista time hunting down those venues.
From anonymous on the L.U.S.T List on 2 Sep 1998
And there will be no human to manually check on the partitions after a power failure.
What's wrong with e2fsck? TTYL!
I was thinking about this recently and I came upon an interesting idea. (I think a friend of mine used the following trick in a commercial product he built around Linux.)
The trick is to install two root filesystems (preferably on different drives -- possibly even on different controllers). One of them is the "Rescue Root" the other is the "Production Root." You then configure the "rescue root" partition as the default LILO device and modify the shutdown sequence to over-ride that default with an /sbin/lilo -R command.
If the system boots from the rescue root, it is because the system was booted irregularly --- the standard shutdown sequence was not run. That rescue root can then do various diagnostics on the production root and other filesystems. If necessary it can newfs and restore the full production environment (from another, normally unused, directory partition or drive). The design of the rescue root is a matter for some consideration and research.
Normally the system will boot into "production" mode. Periodically it can mount the alternative root fs to do filesystem checks and/or an extra filesystem to do backups (of changes to the configuration files). You can ensure that these configuration backups are done under a version control system so that degenerative sets of changes can be automatically backed out in an orderly fashion.
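Here's a minimal sketch of how the LILO half of this might look (the device names and labels are hypothetical, and I haven't tested this):

    # /etc/lilo.conf -- two-root auto-rescue sketch
    boot = /dev/hda
    prompt
    timeout = 50
    default = rescue            # what an unclean boot falls back to

    image = /boot/vmlinuz
        label = rescue
        root = /dev/hda1        # the "Rescue Root"
        read-only

    image = /boot/vmlinuz
        label = production
        root = /dev/hdc1        # the "Production Root"
        read-only

... and, near the end of the normal shutdown sequence:

    /sbin/lilo -R production    # one-shot: applies to the next boot only

A clean shutdown thus arms a single boot into production; a crash or power failure finds no -R entry pending and falls through to the rescue root.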
If you combine this with a watchdog timer card and a set of appropriate system monitoring daemons (which all talk to a dispatcher that periodically resets the watchdog timer), you should have a system with about the most bulletproof autorecovery possible on PC equipment.
I should note that I haven't prototyped such a system yet. I've just thought of it. A friend of mine also suggested that we devise a way to have another proximate system also doing monitoring (possibly via a null modem). He says he knows how to make a special cable which would plug into the guard dog's printer/parallel port (guard dog is what I've been calling the hypothetical proximal system) and would be run into the case of the system we're monitoring, where it would fit over the reset pins. This, with a small driver, should be able to strobe the reset line.
(In fact I joked that we could create a really special cable that would daisy chain to as many as eight other systems and allow independent reboot of any of them).
In any event the monitor system would presumably monitor some/most of the same things as the watchdog timer; so I don't know what benefit it would ultimately offer (unless it was prepared to do or initiate failover to another standby system).
Perhaps this idea might be of interest to the maintainer of the High-Availability HOWTO (Harald Milz -- whom I've blind copied on this message). It's not really "High Availability" but "Automated Recovery" which might be sufficiently close for many applications. (i.e. if a web, mail, dns, or ftp server's downtime can be reduced from "mean hours per incident" to "mean minutes per incident" most sysadmins still get lots of points).
From R P Herrold on 04 Sep 1998
We build custom Linux solution boxen. In our Build outline, we take this concept a step further in setting up a redhat system -- we carry a spare /boot partition:
(extract) (base 5.0 install)

    Part  name         size  Cyl   cume  actual min
    ====  ==========   ====  ====  ====  ==========
    1     /boot          20  ___     20
    2     root           30  ___     50  23
          (/bin  ___ M)
          (/lib  ___ M)  modules
          (/root ___ M)
          (/sbin ___ M)
    3     swap           30  ___     80
    4     (extended)
    5     /mnt/spare     30  ___    110  1
... The minima in a 'stripped down' / [root] partition vary depending on where /lib, /var, and /usr end up -- of late, a lot of distributions' packages feel a need to live in /bin or /sbin unnecessarily -- and probably should be in the /usr tree ... Likewise, if a package is NOT statically linked, one can end up with problems, if a partition randomly decides to 'go south.'
I was thinking about this recently and I came upon an interesting idea. (I think a friend of mine used the following trick in a commercial product he built around Linux.)
... We use the 'trick' as well
The trick is to install two root filesystems (preferably on different drives -- possibly even on different controllers). One of them is the "Rescue Root" the other is the "Production Root." You then configure the "rescue root" partition as the default LILO device and modify the shutdown sequence to over-ride that default with an /sbin/lilo -R command.
... carrying the full [root] partition
I should note that I haven't prototyped such a system yet. I've just thought of it. A friend of mine also suggested that we devise
... It works, and can avoid the need to keep a live floppy drive in a host which would otherwise require one for emergency purposes ... aiding in avoiding physical security issues
[ normally I remove sig blocks, but since he copyrighted his... I guess I'll leave it in. Curious one should post a copyright into open mailing lists, though. -- Heather ]
.-- -... ---.. ... -.- -.--
Copyright (C) 1998 R P Herrold
herrold@usa.net NIC: RPH5 (US)
My words are not deathless prose,
but they are mine.
Owl River Company 614 - 221 - 0695
"The World is Open to Linux (tm)"
... Open Source LINUX solutions ...
info@owlriver.com
From Joe Wronkowski on 23 Oct 1998
Hi Jim,
I was just wondering if there was an easy way to grab the time/date "Oct 6 21:57:33" from the same line the 153.37.55.** was taken from.
If you are busy I understand.
Thanks
Joe Wronkowski
The following awk assignments will isolate the date and time from these example lines:

    date = $1 " " $2
    time = $3
sample of log file:
Oct 6 21:50:19 rogueserver in.telnetd[197]: connect from 208.224.174.21
Oct 6 21:50:24 rogueserver telnetd[197]: ttloop: peer died: Success
Oct 6 21:55:29 rogueserver in.telnetd[211]: connect from 208.224.174.21
Oct 6 21:55:35 rogueserver telnetd[211]: ttloop: peer died: Success
Oct 6 21:57:33 rogueserver in.pop3d[215]: connect from 153.37.55.65
Oct 6 21:57:34 rogueserver in.pop3d[215]: Servicing request for rogue
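Rolled into a complete pipeline, that might look like this (a sketch; the pattern and log path are just taken from the example above):

    # print date and time for each connection from 153.37.55.*:
    awk '/connect from 153\.37\.55/ { print $1, $2, $3 }' /var/log/messages

    # or capture the first match into shell variables:
    eval `awk '/connect from 153\.37\.55/ { printf "date=\"%s %s\" time=\"%s\"\n", $1, $2, $3; exit }' /var/log/messages`
    echo "$date $time"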
The original message in this thread appeared in Issue 31, and the comp.unix.questions newsgroup. This volley resulted:
From Michael Schwab on 30 Sep 1998
Hello, I just read the article about
So please don't say "WHO CARES ABOUT THAT?"
I'll say what I bloody well feel like saying!
(And I expect my editors, and readership to respond as they see fit).
I'm sorry if that seems like a harsh thing to say but frankly I think you missed the whole point of what I was trying to say before.
First, I don't know of a general way to get the connect speed. It's probably modem specific, so you could probably write a script that queries your modem to get the info. Your script would probably not work with most other modems, and you'd have to hack it into whatever communications package you were actually using on that modem (pppd, uucico, minicom, kermit, slip, whatever).
Another point I made is that the speed is likely to fluctuate frequently throughout the duration of a connection (particularly with any 28.8 or faster modem). It's likely to start out a bit high and be adjusted downward until the corrected error rate attains a suitable threshold.
So, if you had your script reporting a connection speed at one instant --- it would tell you almost nothing about your sustained throughput.
I do !!
More power to you. I didn't ask who cares? I asked what benefit those who "think" they care hope to derive from this statistic.
I can test the temperature in my house to within arbitrary precision. However, the numbers on a thermometer will not motivate me to reset my thermostat or go out and buy a new furnace or air conditioner. It's a meaningless statistic that is no benefit to me.
Also there isn't just one temperature in my home -- there's a whole range of fluctuating temperatures. So precise measurement would be non-trivial.
For subjective issues there's no point in going to great measurement effort. When I'm cold I either turn up the thermostat, or (more likely) toss on a sweater or some little fuzzy booties.
Now, it is the case that I might do some measurements when I'm troubleshooting a line. I'd certainly expect a telco technician to do so if I was complaining about line quality --- and I might do so to "motivate" a telco (or to file my complaint with a public utility commission).
Of course, if I were really serious about a line quality issue, I'd rent an oscilloscope and look through a Fluke (TM) catalog to find a chart-strip recorder for the job.
So these numbers aren't completely useless in all contexts. However, the number of people who can make reasonable use of them is fairly limited.
How can you tell the connection speed that a modem auto-negotiates when dialing an ISP? My system log (/var/log/messages in RH5.1) does tell me the line speed I have set in the chat script, but I would like to know the connect speed as well (56K, 33.6, etc). I know this info must be available somewhere/somehow.
Modern modems don't just auto-negotiate at the beginning of a call. They "retrain" (negotiate speed changes) throughout a call's duration.
You'd "like" to know. But what would you do with this? Order your phone company to pull new copper to your house? Return your modem and try another one? Switch to tachyon beams?
As for "this info must be available" --- look in the programming manual for your modem, if you can find one. It used to be standard for modems to come with a pretty decent technical manual --- providing details on a couple hundred AT commands that comprised that manufacturer's extensions beyond the base "Hayes" set for that particular model. These days you'll be lucky if you pry a dinky little summary of a tiny subset of their diagnostics and commands out of most of these manufacturers.
I don't know how to really get the connect speed, but that might not be so important. Since I have a leased line to the Internet with a modem, it is important for me to know how fast my line is running, because sometimes this line might have a lot of noise on it and the connection might be only 4800 bps instead of 33600 bps. In this case I have to call my telecommunications provider to check the line!!!
If your connection varies by this much you'll know it. You won't need any numbers to tell you that it's *slow*.
If you are trying to serve your customers (web site users, whatever) over this line --- it does make sense to monitor the bandwidth and latency. However these are higher level networking issues that are only marginally related to the underlying connection speed.
But it's easy to detect: just send a ping to the other side when the line has low traffic. I do this by sending approx. 20 pings and then looking at the (lowest) round-trip time. You can send a ping containing 8192 or 16384 bytes of data and you will detect nearly every change in bandwidth.
Aha! Now this is a totally different metric. You aren't measuring "modem connection speed"; you're measuring latency and throughput! Doing this once every two or three hours with about 5 pings, and setting a "watcher" to monitor those and warn you when the latency gets too high or the throughput gets too low, would make sense --- for a dedicated line where you are concerned that your customers/users are waiting "too long" for their traffic.
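A minimal sketch of the kind of "watcher" I mean, run from cron every couple of hours (untested; it assumes the usual Linux ping(8) summary line, and the peer address and threshold are made up):

    #!/bin/sh
    # pingwatch -- warn when the lowest round-trip time over the link
    # gets too high.  PEER and THRESHOLD are hypothetical values.
    PEER=${1:-192.168.5.1}
    THRESHOLD=500    # milliseconds

    # ping prints a summary line like:
    #   round-trip min/avg/max = 0.4/0.5/0.6 ms
    MIN=`ping -c 5 $PEER | awk '/round-trip/ { split($4, t, "/"); print int(t[1]) }'`

    if [ -z "$MIN" ]; then
        echo "pingwatch: no reply from $PEER" | mail -s "link down?" root
    elif [ "$MIN" -gt "$THRESHOLD" ]; then
        echo "pingwatch: ${MIN}ms min RTT to $PEER" | mail -s "link slow" root
    fi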
There is a package called MRTG --- the "multi-router traffic grapher" which can be used to create web page graphs of your network traffic statistics over time. It seems to be primarily used by larger sites for ethernet. However it might have some facilities for monitoring PPP (even SLIP) lines.
Actually MRTG depends on SNMP so I should say that you might figure out how to configure the CMU SNMP agents for Linux to interface to your serial interfaces --- and then MRTG could provide the graphs. However, you don't technically need to run SNMP under MRTG --- their docs suggest that you can use various commands to provide it statistics to graph.
You can read more about MRTG at:
http://ee-staff.ethz.ch/~oetiker/webtools/mrtg/mrtg.html
Best Regards Michael Schwab
From Michael Schwab on 30 Sep 1998
OK, the intention of the mail I sent was mainly to give a short help on what might be a suitable answer to the question posed in Linux Gazette. Despite this, THANKS FOR YOUR VERY LONG MAIL; now it's much clearer what you were saying and why. I agree with you when you say the connect speed is unimportant because it changes anyway during the connection time. Since the guy who sent the question to you said that he connects to his ISP, I suggested that monitoring bandwidth and latency might be more meaningful for him than just getting the connect speed!
Anyway thanks for your Answer .....
see you soon
Michael Schwab
From Jason Joyce on 07 Oct 1998
How can you log a telnet session using it from an xterm in Linux? I need to create a log of my actions during a telnet session, and I know that you can do it using telnet under Microsoft. And I know that if those guys have it, then they must have copied it from somewhere, and so I believe that it is possible using Linux, but I can't find any way.
Thanks for any help, Jason
You can run the 'script' command (which creates a "transcript" named "typescript" by default).
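For example (the filename here is arbitrary):

    bash$ script telnet-session.log
    Script started, file is telnet-session.log
    bash$ telnet remote.example.com
        ... everything you see and type is logged ...
    bash$ exit
    Script done, file is telnet-session.log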
You can also run the 'screen' utility, which, among many other features, allows you to open multiple screen sessions through one virtual console, telnet, xterm, or even a dial-up VT100 session or dumb terminal. Think of having all the power of your virtual consoles from any sort of terminal/shell session. You can do screen snapshots, open and close log files, view a backscroll buffer (with 'vi'-like search features), mark and paste text (keyboard driven), do a screen lock, and even detach the whole screen session from your current terminal, xterm or whatever (and re-attach to it from any login, from that or any other terminal, later).
I routinely run 'screen' for all my sessions. When I log into one of my ISP shell accounts I prefer to run 'screen' at the far end because it will auto-detach if my modem disconnects me. So, I can redial, re-attach and resume my work. I can also dial into my home system, do a 'kill -HUP' on my screen process (actually a 'screen -d -R' will auto-locate, HUP, and re-attach all at once) and continue working on all ten of the interactive programs that I had running at the time.
There are other ways you can do this. There was a sample script in 'expect' that did this in about 10 lines of TCL/expect code.
You can also use Kermit (ckermit, from Columbia University). This is a communications package, file transfer package and network client. I wrote an article for SysAdmin Magazine about a year ago describing its use as a telnet/rlogin client.
In addition to being fully scriptable and supporting the same file transfers over TCP/IP as it does over any serial connection, Kermit also allows logging and extensive debugging.
The next version (currently still in beta?) should support Kerberos authentication and encryption (one of several enhancements that I beat up on Frank da Cruz --- its principal author and co-ordinator --- about while researching my article).
So, there's about four options off the top of my head.
From Nathan Balakrishnan on 23 Oct 1998
Hello,
Do you know whether the Yamaha OPL3-SAx WDM soundcard is directly supported by Red Hat 5.0, and how would I go about setting it up under Linux if it isn't?
NATHAN
The best source of information on this subject is probably the "The Linux Sound HOWTO" (http://www.ssc.com/linux/LDP/HOWTO/Sound-HOWTO.html) by Jeff Tranter <tranter@pobox.com>.
I think most of the kernel sound support was originally donated by Hannu Savolainen <hannu@voxware.pp.fi> of 4 Front Technology (http://www.4Front-Tech.com/oss.html) which also sells their "Open Sound System" package (for about $20 US (presumably)).
The version that's included with Linux is known as OSS/Free while OSS/Linux is 4 Front's commercial version. They also support sound drivers on numerous other versions of Unix.
I guess there is an independent "Linux Ultra Sound Project" (http://home.pf.jcu.cz/~perex/ultra), also known as the "Advanced Linux Sound Architecture," which includes your model on its list of supported cards (http://alsa.jcu.cz/src/soundcards.html).
So, try reading those and see if that helps. I personally have almost never used any sound cards. My computers make enough noise with all those fans and disk drives.
From Ralf Schiemann on 23 Oct 1998
Hi, I've a problem with backing up our file server (Linux 2.0.33). Attached to the server is a HP C1557A SCSI TapeLoader (6 x 24 GB). Actions on the loader are done without any problems (e.g. loading and unloading of tapes).
But if I try to do a backup via tar (tar cvf /dev/st0 /tmp) the tape display is telling me "Write" and after a short while "Tape Position lost". In /var/log/messages I find the following errors:
kernel: st0: Error with sense data: extra data not valid Deferred error st09:00: sense key Medium Error
kernel: Additional sense indicates Sequential positioning error
kernel: st0: Error on write filemark.
Can you tell me what's going wrong??
Any help is welcome,
Ralf
I would look at SCSI termination and cabling problems. It sounds like commands are getting through the interface just fine, and streams of data are causing the problem.
You don't say what kernel version nor which SCSI adapter and drivers you're using. If this is a modular kernel, try compiling the appropriate SCSI driver directly into it (to eliminate any chance of anomalies with automatic loading and removal of the SCSI drivers by kerneld, etc).
Try compiling a very minimal kernel with just the drivers you need to do your backup. You want to see if there's some strange conflict between your drivers.
Finally, you might try testing this with a different SCSI adapter, with no other peripherals on it and the best cable you can buy. Is this an internal or external unit? (I'm guessing external, since DAT autochangers are pretty big for internal drive bays.)
If you can afford it, it's best to put your SCSI tape drive on a separate SCSI card (a fairly cheap $60 ISA card is fine). This allows you to take the tape drive off that system without having to reboot, and it maximizes performance on the other bus.
From Steve Snyder on 20 Sep 1998
Is there a validation test suite for glibc v2.0.x? I mean a more comprehensive set of tests than are run by "make check" after building the runtime libraries.
Not that I know of. I guess the conventional wisdom is that if I install glibc, and a bunch of sources and I compile the sources against the libc and run all of them --- that failures will somehow be "obvious."
Personally I think that this is stupid. Obviously it mostly works for most of us most of the time. However, it would be nice to have full test and regression suites that exercise a large range of functions for each package --- and to include these (and the input and reference data sets) in the sources.
It would also be nice if every one of them included a "fuzz" script (calling the program with random combinations of available options, switches and inputs --- particularly with a list of the available options). This could test the programs for robustness in the face of errors and might even find some buffer overflows and other bugs.
However, I'm not a programmer. I used to do some quality assurance --- and that whole segment of the market seems to be in a sad state. I learned (when I studied programming) that the documentation and the test suite should be developed as part of the project. User and programmer documentation should lead the coding (with QA cycles and user reviews of the proposed user interfaces, command sets, etc., prior to coding).
The "whitebox" test suites should be developed incrementally as parts of the code are delivered (if I write a function that will be used in the project, some QA whiteboxer should write a small specialized program that calls this function with a range of valid and invalid inputs and tests the function's behaviour against a suite that just applies to it).
Recently I decided that md5sum really needs an option to read filenames from stdin. I want to write some scripts that essentially do:
'find .... -print0 | md5sum -0f '
... kind of like 'cpio'. Actually I really want to do:
'rpm -qal | regular.files | md5sum -f'
... to generate some relatively large checksum files for later use with the '-c' option. This 'rpm' command will "Query All packages for a List" of all files. The regular.files filter is just a short shell script that does:
#!/bin/sh
## This uses the test command to filter out filenames that
## refer to anything other than regular files (directories,
## Unix domain sockets, device nodes, FIFO/named pipes, etc)
while read i ; do
    [ -f "$i" ] && echo "$i"
done
So I grabbed the textutils sources, created a few sets of md5sum files from my local files (using 'xargs'). Those are my test data sets.
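For example, the sort of thing I mean (the path here is just an example):

    # build a checksum set for everything under /usr/bin ...
    find /usr/bin -type f -print | xargs md5sum > usr-bin.md5
    # ... and verify it later:
    md5sum -c usr-bin.md5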
Then I got into md5sum.c, added the command options, cut and pasted some parts of the existing functions into a new function, and was able to get it cleanly compiling in a couple of hours. I said I'm not a programmer, didn't I? I think a decent programmer could have done this in about an hour.
Then I ran several tests. I ran the "make check" tests, and used the new version's -c to check my test sets. I then used the same script that generated those to generate a new set using the new version/binary. I then compared those (using 'cmp' and 'diff') and checked them with the old version. Then I generated new ones (with the new switch I'd added, and again with the old version) and cross-checked them again.
This new version allows you to use stdin or pass a filename which contains a list of files to checksum --- it uses the --filelist long argument as well as the -f short form; and you can use -f - or just -f to use stdin. I didn't implement the -0 (--null) option --- but I did put in the placeholders in the code where it could be done.
The point here is that I had a test suite that was longer than the code. I also spent more time testing and documenting (writing a note to Ulrich Drepper, the original author of this package to offer the patches to him) than I did on coding.
Though a benchmarking component would be nice, my main concern is to verify that all (or at least the vast majority) of the library functions work correctly. What I want to know is, given a specific compiler and a specific version of glibc source files, how can I verify that the libraries built are reliable?
By testing them. Unfortunately, that may mean that you'll have to write your own test suites. You may have to start a GNU/new project to create test suites.
It is likely that most of the developers and maintainers of these packages have test suites that they run before they post their new versions. It would be nice if they posted the test suites as part of the source package --- and opened the testing part of the project to the open development model.
In addition, these test suites and harnesses (the scripts to create isolated and sample directory structures, etc., to run a program or library through its paces) would serve as a great addition to the documentation.
I find 'man' pages to be incredibly dense. They are fine if you know enough about the package that you are just looking for a specific feature that you think might be there, or one that you know is in there somewhere --- but you don't remember the switch or the syntax. However, a test harness, script, and set of associated inputs, outputs, and configuration files would give plenty of examples of how the bloody thing is supposed to work. I often have to hunt for examples --- this would help.
The specific version I want to test is the glibc v2.0.7 that comes with RH Linux v5.1 and updated after 5.1 release by package glibc-2.0.7-19.src.rpm. I think that such a testsuite, though, if it exists, would be applicable to any platform.
I agree. I just wish I could really co-ordinate such a project. I think this is another example where our academic communities could really help. Before, I've said that I would like to see an "adopt a 'man' page" project --- where college and university professors, and even high school teachers, from around the world assign a job to their students:
Find a command or package for Linux, FreeBSD, etc. Read the man pages and other docs. Find one way that the command is used or useful that is not listed in the "examples" section of that man page. Write a canonical example of that command variant.
... they would get graded on their work --- and any A's would be encouraged (solely at their option) to submit the recommended example as a patch to the maintainer of the package.
Similar assignments would be given for system calls, library functions, etc. (as appropriate to the various classes and class segments).
Along with this, we could have a process by which students are encouraged to find bugs in existing real world software --- write test suites and scripts to test for the recurrence of these bugs in future versions (regressions), and submit the tests to that package's maintainer.
The problem with all of this is that testing is not glamorous. It is boring for most people. Everyone knows Richard M. Stallman's and Linus Torvalds' names --- but fewer people remember the names of the other programmers that they work with, and no one knows who contributed "just the testing."
There are methods that can be used to detect many bugs more quickly and reliably than by waiting until users "bump into" them. These won't be comprehensive. They won't catch "all" of the bugs. However, people will "bump" into enough bugs in normal usage, even if we employ the best principles of QA practice across the board.
Unfortunately I don't have the time to really devote to such a project. I devote most of my "free" time to the tech support department. I do have spare machine cycles, and could gladly devote time to running these tests and reporting results. Obviously some tests require whole networks, preferably disconnected ones, on which to run safely. Setting up such test beds, and designing tests that return meaningful results, is difficult work.
I personally think that good test harnesses are often harder to design than the programs that they are designed to test.
Thank you.
***** Steve Snyder *****
From prince20 on 14 Sep 1998
Hi
My Favorites Folder was converted to a shell file after I reinstalled Windows95 and Internet Explorer 4.01SP.
What is a "shell file"?
Yeah, you guessed it: I did not back up the folder. The problem I have is that I cannot open the shell file. I have used every method I know, but nothing is happening.
Do you know of a tool or a way to open the shell file? Please Email me. Your help is appreciated.
I'd look at it in 'vi' if it was on my system. However, that probably isn't very helpful.
Thank You
Where did you get this address and why did you mail this question to me? I volunteer time to answer Linux questions. I don't run Win 95. Microsoft and other commercial software companies are supposed to have their own tech support departments. If those sources of support are failing you --- perhaps you should reconsider your software purchases.
From Zeki67@aol.com on 08 Oct 1998
I have been trying to connect my brother's PC in Louisville with mine in Atlanta, using his Win95 dial-up as a client and mine as an NT 4.0 RAS server.
I'm the "Linux Gazette Answer Guy" --- call Microsoft's tech support with your questions about their OS.
If you'd like to try Linux at either end of this connection --- be sure to look through some of our HOWTO's and guides at: http://sunsite.unc.edu/LDP (Linux Documentation Project).
We have tried with different protocols, and our workgroups, user names and p/words matching but with no success.
He can dial from Win95, but mine does not respond at all. So I thought of my modem, which is not listed in Microsoft's HCL; it is a CPI Viva 33.6 CommCenter. I know for sure that my modem could not automatically answer the call under NT 4.0, because when I set up my server as a Win95 dial-up server the modem answered and we made the connection. I even tried to edit the modem log for my modem type in case it works, but I didn't know how to edit the log.
Is there any method you can think of to solve this problem? I want to use my NT 4.0 RAS to connect to a Win95 dial-up client. Please help me.
I want to stop getting questions for some OS that I don't run, a derivative of one that I abandoned half a decade ago. Please help me. Call Microsoft (or hire an MCSE, or try your modem vendor).
Thank you. Zeki
[ All of you folks interested in MS-Windows rather than the Linux environment might find http://www.fixwindows.com/ handy; it's run by MCSEs, so I suppose in a worst case, you know where to hire someone. But before you go that far, they have a vendor phone listing, and some hints for effective troubleshooting. There's also a newsgroup hierarchy for your environment.
If you are considering switching and you like experimenting, you might help out the WINE Project at http://www.winehq.com/, run a copy of WinOS/2 under Dosemu (http://www.suse.com/dosemu/), or try any of the growing number of major applications available in both environments. -- Heather ]
From garygonegolfing@juno.com on 14 Oct 1998
Hello, Answerguy,
I found you on the web. Your name simply dictates that I must ask you a question:
A user has a Dell laptop running Windows 95, Office 97, and Outlook 98. Apparently he has acquired some sort of virus (I'm assuming here) because when he opens Outlook 98 (Exchange 5.5) and sends an email (replies or writes a new message), three windows automatically open and the cursor continuously types a character until he hits the spacebar. This happens when he opens a Word document or an Excel document, too.
You only know part of the story. My full "name" is "The Linux Gazette Answer Guy" (tag).
So, I answer LINUX questions.
However....
Background:
I've run McAfee 3.2 (with the latest DAT files) and found no trace of viruses (clean boot, et al.). This laptop was sent back to Dell and they (supposedly) fdisked it and reinstalled the OS. Worked for a while, but IT'S BAAAACK. Definitely sounds like some sort of file infection, but I'm at my wits' end. I've scanned all files on the network and found one macro virus (cleaned).
Any information or insight that you can provide would be welcome.
Thanks for your time, AG.
Gary Hollifield
MIS Manager
FOCUS Enhancements, Inc.
NOTE: Please reply to all (I would like to get this at work, too). Thanks again.
As it happens I used to work for McAfee (as a Unix sysadmin, and their BBS SysOp). I also did some QA on SCAN.
While the behaviour you describe is suspicious, we can't definitely say that it is a virus solely from the symptoms you describe.
I would wipe the system personally (don't send it off to the chop shop, do it yourself). Leave it completely off of the network for a few days (at least twice as long as it seemed to take for the problem to appear on the previous occasions).
Install all software, OS, Office, etc. from the original CDs. Manually disable the "boot from floppy" options in the CMOS setup and the "autoexecute macro" features in WinWord and Excel. Manually inspect all documents that go onto the system (and limit yourself to short documents).
It could be some strange compatibility problem. If you don't see this happening on any other systems in your network with which this system has been sharing files, floppies and users, then it's not a virus (it's not spreading!).
Other than that, I'd consider putting Linux on it, and running that. Although there has been one "virus" for Linux (Bliss, a piece of sample code that actually managed to honestly infect a couple of users), they are simply not a problem for Linux, FreeBSD, or other Unix users.
From Andy Faulkner on 28 Sep 1998
Dear answerguy.
I have been trying for the last several hours to do something that sounded simple when I first started.
I am trying to launch an xdm session on my Linux box from another Linux box on the same network.
I have tried to use "chooser" but it brings up no listed hosts. Also when I fire up chooser I see nothing going across the network. I hate to say this but with my Winblows box I can do it with reflectionX. I am running S.u.S.E. 5.2 and the other machine is running 5.3. We are both using KDE and also using kdm instead of xdm. We have tried both, and both had the same results. It looks as though I am not sending the request out to the host for a xdm session.
I can't seem to find any docs on "chooser" or on how to launch a session on a linux box.
What do you think? Andy Faulkner
I think 'chooser' (/usr/X11R6/lib/X11/xdm/chooser) is normally run by 'xdm' --- probably with some special environment variables and parameters --- and I don't think it's feasible to run 'chooser' by itself. (That would be a good reason to put it in the "lib" directory rather than under a bin directory --- and would explain why it has no man page of its own.)
(I'll grant that this seems like an odd way to do it --- since making 'chooser' a shared library would make more sense from a classical programming point of view. In any event I don't know how you'd use 'chooser' directly).
Remote execution of X sessions via xdm is a bit confusing. Under S.u.S.E. they have a script /sbin/init.d/rx which can be used with their /etc/rc.config variable settings to run the xdm and allow access via XDMCP (the X display manager control protocol).
To remotely access systems running these display managers you have to run your X server with a command such as:
X -broadcast
(connect to the "first" --- or only --- available xdm server).
Alternatively you can specify the server you want to connect to with a command like:
X -query $HOST
--- which will require the host name or IP address.
To use the chooser you have to run a command like:
X -indirect $HOST
... this will cause the xdm on the named host to provide a list of known xdm hosts (possibly including itself).
In any of these cases the 'xdm' process must already be running on the remote host. That host need not be running any X server! (I realize the terminology is confusing --- more on that later).
To quote the xdm man page:
When xdm receives an Indirect query via XDMCP, it can run a chooser process to perform an XDMCP BroadcastQuery (or an XDMCP Query to specified hosts) on behalf of the display and offer a menu of possible hosts that offer XDMCP display management. This feature is useful with X terminals that do not offer a host menu themselves.
(it's also possible to configure the list manually and to configure it so that some BroadcastQuery replies are ignored).
I have no idea whether 'kdm' incorporates all of these functions or not. You should look through its man page for details.
I'm also a bit unclear on how you'd run xdm such that it would not run a local display server. I know it's possible, but I'm not sure how. (In other words, if you want to run 'kdm' on your console and 'xdm' for the remote X servers).
I realize the terminology is a bit confusing here. We have "xdm servers" running on one machine, and X servers (the X Windows display server --- the thing that actually controls your video card) running on other machines. Note that the X display server controls your video card and acts as a communications package between your display (keyboard, video, and mouse) and the programs that are running under X (the "clients" to your display server).
Thus 'xdm' is a "client" to your X display server regardless of whether that 'xdm' process is running on you localhost or on another machine on the network.
To complicate issues even further, the 'xdm' "indirect" option first connects your X server to one client --- then, based on the selection you make from the chooser, it restarts your X server with instructions on connecting to another 'xdm' host.
In the end, when you connect to a host via 'xdm', you log in and it is as though you were running an X session at that system's console. All of the windows you open will be processes running on the 'xdm' host. So you can think of 'xdm' as a combined 'getty' and 'telnetd' and 'login' for the X protocol.
There are a variety of shell scripts under /usr/X11R6/lib/X11/xdm that control how the console is "taken" (back from a user that logs out) "given" (to a user that logs in), "setup" (prior to xdm's display of the "xlogin" widget), "started" (as 'root' but after login) and how the "session" is started (under the user's UID).
You'll want to read the xdm man page and all of the scripts and resource files in the xdm directory to adjust these things. (It just amazes me how complicated all that vaunted "flexibility" and all those customization options can make something as seemingly simple as: provide me with a GUI login).
Anyway, I hope that helps.
"Linux Gazette...making Linux just a little more fun!"
The Internet is the big, wide world of information, which is really great; but when one works with a limited Internet connection, retrieving large amounts of data may become a nightmare. This is my particular case. I work at the National Astronomic Observatory, Universidad Nacional de Colombia. Its ethernet LAN is attached to the university's main ATM network backbone. However, the external connection to the world goes through a 64K bandwidth channel, and that's a real problem when more than 500 users surf the net in the daytime, for Internet speed becomes offensively slow. Matters change when night comes and there is nobody at the campus: the transmission rate grows to acceptable levels. Then one can easily download big quantities of information (for example, a whole Linux distribution). But as we are mortal human beings, it's not very advisable to stay awake all night working at the computer. Then a solution arises: program the computer so it works while we sleep. Now the question is: how do you program a Linux box to do that? This is the reason I wrote this article.
In this article I cover ftp connections. I have not yet worked on http connections; if you have done so, please tell me.
At first sight, a solution comes up intuitively: use the at command to program an action at a given time. Let's remember how a simple ftp session looks (the commands entered by the user are in bold):
bash$ ftp anyserver.anywhere.net
Connected to anyserver.anywhere.net.
220 anyserver FTP server (Version wu-2.4(1) Tue Aug 8 15:50:43 CDT 1995) ready.
Name (anyserver:theuser): anonymous
331 Guest login ok, send your complete e-mail address as password.
Password: (an e-mail address)
230 Guest login ok, access restrictions apply.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd pub
ftp> bin
ftp> get anyfile.tar.gz
150 Opening BINARY mode data connection for anyfile.tar.gz (3217 bytes).
226 Transfer complete.
3217 bytes received in 0.0402 secs (78 Kbytes/sec)
ftp> bye
221 Goodbye.
bash$
You can write a small shell script containing the steps that at will execute. To manage the internal ftp commands from a shell script, you can use a shell syntax feature that permits embedding data that would supposedly come from the standard input into the script. This is called a "here" document:
#! /bin/sh
echo This will use a "here" document to embed ftp commands in this script
# Begin of "here" document
ftp <<**
open anyserver.anywhere.net
anonymous
mymail@mailserver.net
cd pub
bin
get anyfile.tar.gz
bye
**
# End of "here" document
echo ftp transfer ended.
Note that all the data between the ** strings is sent to the ftp program as if it had been written by the user. So this script would open an ftp connection to anyserver.anywhere.net, log in as anonymous with mymail@mailserver.net as the password, and retrieve the anyfile.tar.gz file located in the pub directory using binary transfer mode. Theoretically this script looks okay, but in practice it won't work. Why? Because the ftp program does not accept the username and password via a "here" document; in this case ftp will react with an "Invalid command" to anonymous and mymail@mailserver.net. Obviously the ftp server will reject the connection when no login and password data are sent.
The trick lies in a hidden file that ftp uses called ~/.netrc; it must be located in the home directory. This file contains the information required by ftp to log into a system, organized in three text lines:
machine anyserver.anywhere.net
login anonymous
password mymail@mailserver.net
In the case of private ftp connections, the password field must contain the corresponding account password, instead of an email address as for anonymous ftp. This may open a security hole; for this reason ftp requires that the ~/.netrc file lack read, write, and execute permissions for everybody except the user. This is done easily using the chmod command:
chmod go-rwx .netrc
Now our shell script looks like this:
#! /bin/sh
echo This will use a "here" document to embed ftp commands in this script
# Begin of "here" document
ftp <<**
open anyserver.anywhere.net
cd pub
bin
get anyfile.tar.gz
bye
**
# End of "here" document
echo ftp transfer ended.
ftp will extract the login and password information from ~/.netrc and establish the connection. Say we called this script getdata (and made it executable with chmod ugo+x getdata); we can program its execution at a given time like so:
bash$ at 1:00 am
getdata
(control-D)
Job 70 will be executed using /bin/sh
bash$
When you return at the morning, the requested data will be on your computer!
Another useful way to use this script is:
bash$ nohup getdata &
[2] 131
bash$ nohup: appending output to 'nohup.out'
bash$
nohup permits the process it executes (in this case getdata) to keep running even if the user logs out. So you can work on anything else while a set of files is retrieved in the background, or log out without killing the ftp child process.
In short, you may follow these steps: set up ~/.netrc with the server's login data, write a script with the ftp commands in a "here" document, make it executable, and schedule it with at (or launch it with nohup). And voilà.
Additionally you can add more features to the script, so it automatically manages the updating of the ~/.netrc file and generates a log information file showing the time used:
#!/bin/sh
# Makes a backup of the old ~/.netrc file
cp $HOME/.netrc $HOME/netrc.bak
# Configures a new ~/.netrc
rm $HOME/.netrc
echo machine anyserver.anywhere.net > $HOME/.netrc
echo login anonymous >> $HOME/.netrc
echo password mymail@mailserver.net >> $HOME/.netrc
chmod go-rwx $HOME/.netrc
echo scriptname log file > scriptname.log
echo Begin connection at: >> scriptname.log
date >> scriptname.log
ftp -i <<**
open anyserver.anywhere.net
bin
cd pub
get afile.tar.gz
get bfile.tar.gz
bye
**
echo End connection at: >> scriptname.log
date >> scriptname.log
# End of scriptname script
Creating such a script by hand each time we need to download data may be annoying. For this reason I have written a small tcl/tk8.0 application to generate a script to suit our needs.
You can find detailed information about the ftp command set in its respective man page.
by Ron Jachim and Howard Cokl
of the
Barbara Ann Karmanos Cancer Institute
Introduction
The advantages of having a CD-ROM jukebox should be readily apparent in a networked environment. You can provide multiple CD-ROMs to multiple people. Granted, in an ideal environment you would want the throughput of SCSI CD-ROM drives, but there are also disadvantages to SCSI drives: they are more expensive and harder to configure. A cheaper alternative is to use a bunch of IDE CD-ROM drives. Many people even have slower ones lying around because they just had to have a faster one.

What You Need

I assume that you can assemble all of the parts necessary. You may have to call around and ask about an appropriate case -- order it with a power supply, as they sometimes use unusual ones. JDR does not show one in their catalog, but we got ours from JDR. The most unusual component is the IDE controller, which we describe below, and it is not even that unusual.

IDE Controller Description

The only key to this server is that you can have up to four IDE channels, each capable of supporting two drives. The controller must be ATAPI compliant to support IDE CD-ROM drives. Assuming you use a single IDE hard disk for booting, that leaves up to seven connection points for additional IDE devices, in this case CD-ROM drives. An appropriate controller is the Enhanced IDE controller card, part number MCT-ILBA, from JDR Microdevices (www.jdr.com), which lists at $69.99. Many motherboards are capable of supporting one or two IDE channels, so configuration instructions vary slightly. The rest of the discussion assumes you want a maximal configuration.

No Channels on the Motherboard (two IDE controller cards required)

Configure the first controller so that its first channel is the primary controller and the second channel is the secondary controller. The controller card should have a BIOS address and you will need to make sure it does not conflict with any other BIOS address ranges already in use (or on the other IDE controller card). Configure the second controller so that its first channel is the tertiary (third) controller and the second channel is the quaternary (fourth) controller. Note your IRQ and I/O address range for all channels. Remember, you cannot share the IRQs, I/O address ranges, or BIOS address ranges.

1 Channel on Motherboard (two IDE controller cards required)

If the motherboard has one IDE channel, it will support two IDE drives. Configure the channel as primary. You probably have no choice in this, but if you do, choose primary. Configure the first IDE controller card so that its first channel is the secondary controller and disable the second channel. The controller card should have a BIOS address and you will need to make sure it does not conflict with any other BIOS address ranges already in use (or on the other IDE controller card). Configure the second IDE controller card so that its first channel is the tertiary controller and the second channel is the quaternary controller. Note your IRQ and I/O address range for all channels. Remember, you cannot share the IRQs, I/O address ranges, or BIOS address ranges.

2 Channels on Motherboard (one IDE controller card required)

If the motherboard has two IDE channels, it will support four IDE drives. Configure the first channel as the primary controller and the second channel as the secondary controller. Configure the IDE controller card so that its first channel is the tertiary controller and the second channel is the quaternary controller.
The controller card should have a BIOS address and you'll need to make sure it does not conflict with any other BIOS address ranges already in use (or on the other IDE controller card). Note your IRQ and I/O Address range for all channels. Remember, you cannot share the IRQs, I/O address ranges, or BIOS address ranges. Table of Common IDE Information
  #   Channel      IRQ   I/O Address*
  0   Primary      14    1F0-1F8
  1   Secondary    15    170-178
  2   Tertiary     11    1E8-1EF
  3   Quaternary   10    168-16F
* Note: the documentation with our card was incorrect.
Software Installation
Once you have configured the hardware and noted all settings, you are nearly done.
Start the Slackware installation with the boot disk. A normal Linux installation has two IDE channels configured, so you only need to configure the other two channels. At the "boot:" prompt, specify the additional IDE channels using kernel "command line" options. For example,
boot: ide2=0x1e8,0x1ef,11 ide3=0x168,0x16f,10
As you can see, the third IDE channel (ide2) uses I/O addresses in the range 1E8-1EF and IRQ 11. The fourth IDE channel (ide3) uses I/O addresses in the range 168-16F and IRQ 10.
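Once the system is installed, you will probably want these options applied automatically at every boot. Here is a minimal sketch of a LILO configuration fragment; it assumes LILO is your boot loader, and the kernel image and root partition names are placeholders to adjust for your setup:

    # Hypothetical /etc/lilo.conf fragment -- the image and root values
    # are assumptions; the append line carries the same IDE options
    # given at the boot: prompt above.
    image = /vmlinuz
      root = /dev/hda1
      label = linux
      append = "ide2=0x1e8,0x1ef,11 ide3=0x168,0x16f,10"

Remember to rerun lilo after editing the file so the new boot record is written.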
After completion of the Slackware install it is simply a matter of either exporting the drives for NFS mounting or configuring Samba and sharing the drives.
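As a rough sketch of the NFS route, you might mount each drive on its own directory and export the directories read-only; the device names and mount points below are assumptions that depend on which channel and position each drive occupies:

    # Hypothetical /etc/fstab entries, one per CD-ROM drive.
    /dev/hdc   /cdrom0   iso9660   ro,noauto   0 0
    /dev/hde   /cdrom1   iso9660   ro,noauto   0 0

    # Hypothetical /etc/exports entries making each disc world-readable.
    /cdrom0   (ro)
    /cdrom1   (ro)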
Next Step
The next thing we would like to do is configure the CD-ROM server with 8 CD-ROM drives and no hard disk. We feel it is a technically elegant solution to have the boot disk be a custom-burned CD-ROM and use BOOTP or DHCP to handle the network configuration. A possible alternative is to use a solid state drive for boot purposes.
Other people did point out that this topic has been around for a while.
Indeed, through the AltaVista search engine I found pointers to discussions
that occurred about setting up a Linux certification program back in 1996.
The issue now is that the momentum of certification within the IT industry
just keeps increasing, and the responses to my article make me that
much more sure that we need to move now to build a unified Linux
certification program that we can all get behind and promote with the
same energy and enthusiasm that Microsoft promotes the MCSE and Novell
promotes the CNE.
The biggest single item that can kill a Linux certification program is if we in the Linux community wind up with 4 or 5 separate programs! (Do I hear the UNIX wars again?) There is strength in numbers: can we build a common program? Please join me on the mailing list and let's see if we can give it a shot!
This article evolved from my frustration in developing and debugging CGI programs written for the AuctionBot, a multi-purpose auction server. I found that the available C libraries, C++ class libraries and web server extensions did not fit my needs, so I decided to implement a different approach based on debugging over TCP sockets. Using this approach, I have successfully implemented and debugged most of the CGIs for this project.
My development machine at work is a Sun Ultra 1 running Solaris 2.5. At home I have a Linux box running RedHat 5.0. I developed all my debugging code under Linux. Linux provides a very stable development environment that allowed me to develop, test and experiment locally, without requiring remote login to the Sun. Once the code was running, I simply moved it over to the Sun, built it and started using it.
A common CGI debugging technique involves capturing the environment that a CGI runs under (usually to a disk file), restoring the environment on a local machine and running the CGI locally within the restored environment. Using this technique, CGIs can be run from the command line, or from within a debugger (gdb for example), and debugged using familiar techniques. This approach is straightforward, but requires the developer to perform the extra work of capturing and restoring the CGI runtime environment.
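As an illustration of the capture half, here is a minimal sketch in C; the output path is an arbitrary assumption, and a real CGI would call something like this early in main():

    /* Sketch only: dump the CGI's environment, one VAR=value pair per
       line, so it can later be restored for a command-line or gdb run. */
    #include <stdio.h>

    extern char **environ;   /* the environment supplied by the web server */

    void dump_environment(void)
    {
        char **e;
        FILE *fp = fopen("/tmp/cgi.env", "w");   /* assumed capture file */
        if (fp == NULL)
            return;
        for (e = environ; *e != NULL; e++)
            fprintf(fp, "%s\n", *e);
        fclose(fp);
    }

Restoring the environment is then a matter of re-exporting each captured VAR=value pair in a shell (or a small wrapper script) before starting the CGI under gdb.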
Another problem in debugging CGIs is viewing the output of a CGI that fails to run correctly. If you are using Apache 1.2 or later, this can be addressed by configuring the web server to log error messages to an error log file. This approach works for some classes of problems, but does not provide the granularity I wanted.
One could write debugging/status information to log files and use tail -f logfile to view the file. This works, but can produce deadlock conditions if multiple copies of your CGI are running and they attempt to use the same shared resource (the log file) without file locking. Developers must provide file locking code and handle possible deadlock conditions, including cases where a CGI crashes before it releases its file lock [1]. In addition, all writes must be atomic to ensure correct output.
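If you do go the log-file route, the locking can be as simple as the following sketch using flock(); the log path is an assumption, and note that flock()'s advisory locks are released by the kernel if the holder exits, which softens but does not eliminate the concerns above:

    /* Sketch only: serialize appends to a shared log file. */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/file.h>
    #include <unistd.h>

    void log_message(const char *msg)
    {
        int fd = open("/tmp/cgi.log", O_WRONLY | O_APPEND | O_CREAT, 0644);
        if (fd < 0)
            return;
        flock(fd, LOCK_EX);            /* block until we hold the lock */
        write(fd, msg, strlen(msg));   /* O_APPEND keeps writers at EOF */
        flock(fd, LOCK_UN);
        close(fd);
    }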
Ideally, one would like to debug the CGI in its natural surroundings, i.e. from within the environment created by the web server, without any extra setup work.
SocketDB provides the required behavior to debug CGIs over TCP sockets. The class supplies methods to connect to the server and write strings over a TCP socket.

    class SocketDB {
    private:
        int mSD;
        ErrorTypes mErrorType;
        int mConnected;
    public:
        SocketDB();
        SocketDB(char *name, int port);
        ~SocketDB();
        int Connected() { return mConnected; }
        int ErorType() { return mErrorType; }
        int Connect(char *name, int port);
        int Print(char *format, ...);
        int Println(char *format, ...);
    };

To connect to the server, use the SocketDB constructor, passing the server name and port, or use the Connect method. Both will attempt to connect to the server on the specified port. Use the Connected method to determine if the connection was successful, or use the return value of Connect. The Connect method returns 1 if connected, otherwise 0. If a connect error occurs, use the ErorType method to get error information. The file Socket.C enumerates the error types.
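As a quick sketch of what this looks like inside a CGI, the fragment below connects and sends one debug line; the header file name, host and port are assumptions, not part of the article's distribution:

    // Sketch only: emit one debug line to a running debug server.
    #include <stdlib.h>
    #include "Socket.h"    // assumed header declaring SocketDB

    int main()
    {
        SocketDB db("devbox", 4000);   // hypothetical debug host and port
        const char *qs = getenv("QUERY_STRING");
        if (db.Connected() && qs)
            db.Println("QUERY_STRING=%s", qs);
        /* ... normal CGI processing continues here ... */
        return 0;
    }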
DebugClient (see DebugClient.C) shows how to use the class.
For simplicity, I designed this program to run from the command line, rather than as a CGI program run by the web server. I chose this approach so users could quickly run the program and see how the socket debug class works. Integrating the class into a CGI is very straightforward.
The program attempts to connect to the debug server specified by the command-line arguments host and port (see source code). If it fails to connect, it prints a message and the error code, then exits. If it connects, it prints the test string, writes the same string over a TCP socket to the debug server, and reports the result of the write.
DebugServer (see DebugServer.C) implements an example debug server [2]. This program is a simple echo server that creates a socket, binds to it and accepts connections from clients. Once it gets a connection, it forks off a child to handle the connection; in this case, the child just reads a string and echoes it.
To run the server, cd to the directory containing the example programs and type DebugServer [port], where port is the port you want the server to listen on. For example, to run the program on port 4000, type DebugServer 4000.
In another shell, cd to the directory containing the example programs and type DebugClient [host] [port], where host is the host name of the machine the server is running on (get this by typing hostname at the command prompt) and port is the port where the server is listening (4000, for example).
You should see a text string written to the server and to the shell.
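In other words, a session along these lines (the host name is an example, and the echoed text depends on what DebugClient sends):

    DebugServer 4000 &
    DebugClient myhost 4000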
Code: http://groucho.eecs.umich.edu/~omalley/software/socket_debug-1.0.tar.gz
Fall Internet World 98
A View From the Show Floor
By Stephen Adler
I just experienced my first big-league Internet show. And was it a doozy.... The show was titled 'Fall Internet World 98' and it took place in New York City's Javits Convention Center. There was a 4-day 'vertical' track on TCP/IP, which was one of the motivations for going to the show. The other was to meet the commercial Linux people in person. So what follows is a diary of what I can remember of the show.
Day 1) I live on Long Island, NY, and I have to take a 1.2 hour train ride in order to reach the Javits Convention Center where the show was being held. My day starts by getting up at 5:45 am, taking a quick shower, and trying to get to the train station, a good 30 minutes from home, by 6:30 am. This first day, I got a call from the experiment where I work, telling me that data could not be collected. I'm the DAQ guy. I figured I would drive by work, fix the DAQ problem and continue on to the train station. The problem was minor, but I missed the 6:30 train I wanted to take and ended up on a later one. What was the rush? According to the Fall Internet World 1998 web page, the keynote speakers were to start on Monday at 9 am, and I still had to register. I wanted to get into Javits, register and get a good seat for the keynote, so I was rushing to get to NYC. The train ride in was uneventful. The weather was fantastic: 70-odd degrees that day, clear blue fall skies. Getting into Penn Station and out onto the streets of NYC on a bright, clean, crisp fall day is hard to explain. You have to experience it yourself. Javits is between 34th and 36th or 37th Street at 11th Ave; Penn Station is at about 8th and 33rd. So I take off west, down 34th, searching for Javits. I had seen it from the outside a long time ago and wasn't really sure where to find it. Found it; hard to miss. And yes, there was some kind of computer show going on there. The front of the convention center had these large banners draped with some message from Hewlett Packard for all to see. There were some other banners draped in front of the building which I cannot recall now.
In I go, expecting to see thousands, and the place looks rather empty. I peer into the show floor to find boxes and crates unopened all over the place. "Gee," I think to myself, "they have a lot of work to do in order to get set up for today." I go over and register; there is no one in line. And again I think to myself, "This is weird, the place is dead." I was worried that I would miss the keynote address of John Gage, some science guru working for Sun. Well, it turns out that the show was really to get going on Wednesday. Ha, this explains all; I was rushing around for no purpose at all. The good thing was that the sessions I wanted to attend did start today, so waking up at 5:45 am was not a complete waste of my time. Now all I had to do was blow off an hour waiting for my session to start. In the meantime, I went to get a cup of coffee from one of the show vendors. I spent 5 bucks on an oversized coffee and muffin. The coffee these guys sold me was so charged up that I ended up running to the bathroom to pee at every break in my session.
10am finally rolled around. I went to my session, titled 'The Infrastructure of IP' or something like that, and spent the rest of the day listening to a rather polished man (polished in appearance) telling me about IP. I knew about 70% of what he was telling me and was glad to learn the other 30%, which I'd heard of but never knew the details of. (What exactly is a class A, B, C, D or E network, and the whys and wherefores of DHCP, a rather new protocol to replace bootp; new in that I first heard of it when RedHat 5.1 was released.) The other stuff he covered I can't remember now. What I remember most of this session was that this guy reminded me of a tele-evangelist. First off, the guy wore a very nice suit. You can't blame him; it's his job, and working in the private sector, you have to look good. He worked for a training company, and this explained why (at least I assumed why) he presented his material as he did. His style was as follows. His presentation tool was PowerPoint, jazzed up with animations. A slide could not just pop up. The letters had to roll in, highlight streaks had to streak in, and the bullet items came rolling in one after another with a nice time delay of several seconds between them. Very slick. He would present his material in a way which was supposed to make you feel good about what you were listening to. He kept asking questions, not for the sake of the question, but to get the audience involved. He would walk up and down the aisle talking about IP headers and the OSI networking model, always interjecting, "Do you feel comfortable with that? Is it all coming together now?", all the while giving me this weird feeling that I needed to yell "Amen, the TCP/IP God is great and forgiving".
This went on for the rest of the day: sitting inside this small room listening to the wonders of IP. At one point, I decided I needed to get out and look around a bit to see what the rest of the conference attendees were engaged in. I poked my head into one room, about 4 times the size, full of people talking Web marketing strategies. I mean it was full. This pointed out to me one fact about the internet: very few know how it really works, and the rest are trying to cash in using browser technology.
Day 2) Since my session didn't start until 10am, I didn't rush to catch the train. Instead I took my wife to work, and then had to run and catch the last train which got into NY at 10am, meaning I would miss the first 15 minutes of my session. That's OK. After sitting through about 6 hours of tele-evangelism, I figured I could miss the first 15 minutes of "Integrating Unix and NT", or was it "Making Unix Talk to NT", or something to that effect. The idea being that you were to learn how to set up a heterogeneous Unix/NT computing environment. I got the same guy from yesterday giving this session; great guy, but I couldn't take it anymore. He ended up getting hung up on setting up a DHCP server on his laptop running NT. Hey, I can fill in a widget form thing with IP numbers too... I figured I had had enough and that this time I wouldn't learn much. I wanted to see what else was 'out there'. So I wandered over to the ISP session. There was an interesting talk on setting up VPNs. That was new to me. Virtual Private Network. I still don't understand why it's such a good thing. To me, it has a bit of a snake-oil man's thing to it. Look, we can set up this 'tunnel' between sites on your enterprise. It's secure, it uses the internet, it drives costs down. And I'm thinking to myself, "Well, if I've got secure shell on my PC at home, and I've got secure shell on my PC at work, and I ssh between the two, I must have a VPN!". I'm pushing the forefront of internet technology without even knowing it. I guess VPNs are for those who don't have access to ssh. Hmmm... I paid $0 for ssh; I wonder what it costs to set up a VPN? Do the ISPs give it away? I wandered from the ISP session to the Telephony session. I learned about VPNs in that session too. Here, there was a slick woman from 3Com who had even slicker .ppt files to dazzle you with. These .ppt files were in full animation. Cartoons would pop up and disappear, text would flow, arrows and pointers swooshed. I hope I don't get a .ppt deficiency complex next time I present my all-too-static transparencies. (Transparencies.... (Yes, I do code in Fortran more often than I would like to admit. But I have written some C++ code, as crude as it was...))
Lunch came next. The day before, I had gotten a hotdog from a vendor across 11th Avenue for $1.50. With soda it cost me less than $3.00. Today, I got into the cafeteria line, pulled a rather bland ham-and-cheese hero-looking thing from a shelf, a bag of chips and a soda. $10.00!!!! I grunted as I pulled out a $10 bill from my wallet, but the cashier didn't seem to care. (It's not her fault I'm stupid.) I wandered around the tables and found one where only one guy was sitting at a table which fit 4. I sat down and munched away. After some time, I got to talking to the guy. He was a chap from Israel with a T1 connection out of his house and a 45Mbit connection coming in! Talk about an asymmetric plug into the Internet. My god. This guy was into testing some on-demand DVD video app into his house. Well, I'll be waiting for my 45Mbit connection from Bell Atlantic soon. Yeah, real soon. It took 9 months to get ISDN into my house from Bell Atlantic, when they swore up and down it would be 3 weeks tops. Using Adler's law of monopolistic trends in hi-tech, I give Bell Atlantic 20 years before I see 45Mbits into my house, even though this guy has it *now*. (I'll have more sarcastic comments on this topic later...) OK, that was lunch. I decided to blow off the rest of the Unix/NT session. At this point I can't remember very well what I did. It's all getting rather blurry. I do remember the last session I went to on day 2 of the conference. It was titled "The Governance of the Internet" and was a panel of a bunch of rich guys discussing how the government should not intervene in regulating what is deployed on the internet and how it's deployed. The unfortunate part was that too much of the discussion centered on 'adult material', with eyes rolling up at each mention of that dirty subject.
Day 3) Finally, the first day of the real conference. I got up at 5:45 am and rushed off to catch the train. The 7:05 express got me in about 8:30, which would be enough time to walk over to Javits and catch a good seat for the morning keynote. The delivery of this keynote really set the stage for the next two days of the conference. The keynote took place in the 'special events hall', a large auditorium with a low ceiling which I estimated could seat about 1000 people. The stage was set up with 4 projection-size TV screens (20 feet high by 30 feet long; I don't know if I have the aspect ratio correct there, but they were big). Above the speaker's podium was another regular TV which must have been at least 48'' in size. The props which fit between these screens were black with thin fluorescent geometric designs (predominantly orange fluorescent tones). As I walked in, some rather hyped rock and roll music was playing. Fast-beat music. I'm glad I didn't have a cup of the coffee they served there in the Javits food stand, because between the caffeine overdose they serve and the rock and roll, I would have shot out of my chair. So there I wait, rock and roll in the background, cool fluorescent stage props in front, and tons of MecklerMedia ads on the TV screens (all 5 monster screens of them). The music let up, the screens went blank and the show was about to begin. The first 2 or 3 minutes were dedicated to a rather glitzy ad for Sun Microsystems. More rock and roll; the 5 screens lit up with MTV-style imagery dedicated to promoting Sun. After that, some rich guy (member of the overclass) comes out and introduces himself. (Head of MecklerMedia, the sponsors of the show.) He eventually gets around to introducing the keynote speaker, John Gage. John, from what I can tell, has a science background. I would assume he has a Ph.D. in physics or something, since he is the science liaison for Sun Microsystems. Being that I'm a scientist, I figured this would be a good chance to see what us science guys are doing to help internet technology. He gave a very good talk. In the end, he ended up promoting Sun's alternative to CORBA, called jini. And no, it's not in the public domain. John had some guy who seems to be involved in the development of jini come out and tell us what jini is and how it would affect the world. The appliance world, that is. Jini was going to be, dare I call it, the OS which runs in your camcorder, cell phone, PC, coffee pot, refrigerator, the steering and braking system in your car, the landing gear in the next plane you fly, the stop light at your closest busiest intersection, the elevator in the World Trade Center... Wait, is this a Really Stupid Idea!!!! This is nuts!!!! I don't want my car's braking system to be on the Internet! No way! It's going to be break-in city. All the hackers (I don't mean to give all hackers a bad name) who dedicate themselves to testing system IP vulnerabilities are going to have a field day. I am sure there will be a web page with the IPv8 address of my braking system and the buffer-overflow code which you can download into this jini thing to make the braking system invert: instead of pushing the pedal in my car to brake, I'll have to push the brake pedal to release the emergency brakes. Good grief, I thought the year 2K freaks were crazy about the end of the world. Jini will end it all. After that, the jini guy finished talking about the object'ivity of this code. (You should have heard him rant:
"This cam-corder is an object. Its got methods! The record method. The 'upload your data' method") all while he was staring intently at the cam-corder. It was if he was looking into and beyond the cam-corder into every appliance on the internet, including the breaking system of my car. John finished off his talk in a brilliant fashion. he pulled up the 'coolest taxi in Colorado' web page for us to see. Some guy, I can't remember where in Colorado, has wired up his cab to the internet. the interior of his cab is totally wacked out. Its got a full complement of musical instrument, drums, key board, amplifiers etc. as well as some digital camera which he used to take pictures which he uploads to his web site. Here check it out. Click here.
After that bit of excitement, I decided to pace myself and go to some sessions before hitting the trade show floor. The problem is that I can't remember what sessions I went to. But I do know that I only went to one of them, because soon after, I was on my way to check out the RedHat booth. My main calling at this show was to meet the RedHat team. I wouldn't call myself a Linux fanatic, maybe just an enthusiast. And I've gone through about 50 installations of RedHat on one machine or another since I started using it in the spring of 1996. I've been following the growth of RedHat on a somewhat daily basis since then, and I've seen that they tour the world, meeting LUG groups and whatnot. So, needless to say, I did have a peak of curiosity about meeting someone from RedHat in flesh and blood. My search for the RedHat booth was frustrated by the poor documentation provided by the show. I went to the first floor, looking for booth 3368 or something like that, and found an empty booth space in the far back reaches of the first-floor show area. I then found out that they were on the second floor. This was good, since this was the main show area. Then I went to the second floor and wandered around looking for them. Again, the booth numbering is not quite random, but close. I'm sure mathematicians have a name for it. Local Random Space, or local non-transforming functionals, who knows. I finally stumbled into them. There it was, the RedHat booth. I was expecting it to be mobbed by people, but it was not. It was rather empty. They had one or two PCs running RedHat Linux and the secure version of Apache. I went up and introduced myself to Melissa, the PR woman for RedHat, although she didn't want to refer to herself as a PR person. I guess there is some stigma attached to the PR departments of high-tech companies which eludes me. Maybe it's because I don't watch enough TV to see all the MS commercials. In any case, I told Melissa that I expected RedHat was going to get really big. I was curious to find out what was going on with the company. She told me that it was crazy right now. My guess is that the RedHat team is hacking late into the night. With the recent investment of Netscape, Intel and two venture capital firms, they are clearly booming. (I recently saw the announcement for two new positions at RedHat on their announcement mailing list.) As I stood around the booth, it was clear to me that people were continuously coming to the RedHat booth to ask questions. I was trying to stay out of their way, or answer some questions for them if some people couldn't get to the RedHatters. After telling Melissa that I have a RedHat mirror site, she got excited and gave me a mouse pad and a poster. I hung around a bit more and found out that all the other Linux vendors were in the Oracle Partners pavilion. So I headed over there.
There I found the Pacific HiTech guy, the Caldera guy, the SuSE guys, and the VA guy. I spent some time with each. At that time, the VA guy was in a crisis situation. His PC had arrived broken. It had been shaken up during shipping. Eavesdropping on the situation, it sounded like the disk drive had not been properly screwed into its bay, and when the VA guy opened up the box, he found the system disk sitting on the bottom of the enclosure. After putting the disk back where it belonged, it wouldn't boot. At that time, there was some guy from RedHat trying to figure out how to get it back up and running. It was tense. The RedHat guy had a bead of sweat coming down the side of his forehead while he frantically typed commands at the console trying to diagnose the problem. I've been in similar situations, but not as bad as having my system dead on the show floor of a major Internet conference. Instead of standing around looking over his shoulder adding to his pressure, I told the guy good luck and took off for lunch. (I stopped by some time later, and the machine was running fine.)
Lunch. Two hotdogs and a soda. All for under $5. Much better. Thank you, street vendor. (Hmm... I see parallels here between open and closed source development, and lunch from the street vendor versus the conference cafeteria.)
After lunch came the Oracle keynote, given by the CEO of Oracle, Larry Ellison. The only time I'd seen him before was in a very good documentary by this Cringely guy, titled something like "The Revenge of the Nerds", which tracked historically the rise of the Silicon Valley power houses along with MS. The pre-keynote show, or ad, was really intense. All 5 TV displays were in full swing, throwing up graphics and images of Oracle and the Internet. The music was very loud and fast. The adrenaline rush was mounting. After about 5 minutes of this extremely intense pitch, the noise gave way to silence. Then someone from the audience shouted "LOUDER!". Everyone laughed. And out came the CEO of Oracle. I don't know if he caught that, but I would have been rather embarrassed. So off he goes, ranting and raving about the future of computing. He ragged as much as possible on Microsoft. (There was an article in the NY Times which talked about NT servers in every Burger King or McDonald's, and he thought that was a bad idea.) He then went on to describe the power of the internet and how his product was going to take advantage of it, etc., etc... It's hard to take so much promotion of someone's software. The one thing that irked me was that he was confusing the internet with the browser. He kept saying things like "You can access our database on the Internet", and he popped up Netscape and ran through some demo. I have a feeling that either he figures the regular Joe Schmo user considers the browser to be the Internet, or he is a regular Joe Schmo user who doesn't know the subtleties of what he was ranting and raving about. In any case, while he was stepping through his demo, which was running on an iMac, the app froze and there was a frantic rebooting of the machine. The Oracle guy was able to talk his way through the rebooting of the poor iMac. This is life at the bleeding edge. Even Larry Ellison has to bleed a little.
After the keynote, I turned my attention to a session titled "Getting the Most Out of the Mozilla Source Code." Cool, open source; finally, something about the real future. The guy who spoke impressed me. He was an African guy who waxed well about web page development. I was glad to see that the field of Internet technology is not completely dominated by males of protestant/european descent. The session that followed was by some guys from real.com (I think the name has changed) who talked about audio and video compression, the topic of the session being multimedia in your browser. The technical stuff they covered was good. I can now claim to be an expert in audio and video compression. I know the jargon words: compress, equalize, encode, decode, key frame, mpeg, and so on and so forth. With that, I can bullshit my way through any multimedia discussion.
I lost patience with the conference sessions and decided to go back to the show floor. Instead of rushing off to the RedHat booth in mere panic, I scouted out the various setups put up by all these forefront companies. The companies who rented real estate from Javits were a who's who of my life blood: HP, Sun, SGI, IBM, Motorola, Cisco, Microsoft, Bell Atlantic, Computer Associates, O'Reilly, Oracle, Sybase, and on and on. The big players had big booths, and just as in the real world, the real estate proverb of "Location, Location, Location" applies equally well here. All the companies with big bucks were positioned right in front of the several entrances to the main show floor. IBM bought the best spot; they were just behind the main entrance. Microsoft had the second-best spot, which was just to the right of IBM. It's hard to describe the impression on some guy who has never seen this kind of presentation before. It's BIG, it's LOUD, it's FLASHY, it's CATCHY, it's MTV, it's exhausting. These Fortune 500 booths all had big audio/visual displays advertising their merchandise. All screens were BIG. Those cool Sony TVs where you put 9 or 16 of them together in an NxM array so that together they make up one big TV screen were all over the place. IBM must have had half a dozen of these arrays set up. The detailed setup of all these booths has been lost from memory. Some exceptions linger.
First is Motorola's Digital Diner. Forget the elaborate array of video technology (gadgets); Motorola, I think, outdid everyone with their Digital Diner. As I strolled around the floor trying to keep my mind from exploding from information overload, I saw this diner-looking structure with a bunch of people standing around, rather captivated by what was going on inside. I got a closer look, and it took a bit of focusing (my brain was fighting these peak levels of information infusion), and I realized that inside this diner is a restaurant mockup with a full Broadway cast singing and dancing to the handful of show attendees who caught a seat at one of the booths inside this Digital Diner. They were singing and dancing to the tune of IP Telephony, no less. The cast was a hoot. They had a cop, some sales guy, and 3 waitresses. And sing and dance they did. From the outside of the booth, you could not hear the music or what they were saying, but the visual of waitresses dancing around with coffee pot and mug in hand, with those head-held microphones, was just too cool. VIVA New York City! (My guess is that the cast is from Pasadena and they tour the country going from Internet show to ISPCon singing and dancing the IP Telephony tune, but NYC is the center of the universe for Broadway shows, and seeing this kind of production in Javits was special. At least to me....) Not to be outdone, the folks at Computer Associates had their own production going. Their theme was jazz, and the stage was a funky bar/cabaret setting. Here they had a couple with the familiar head-held mics, dancing around singing about CA solutions for your corporate enterprise. They were supported by another couple with no head mics who just danced around. Again, a type of 50's be-bop, much as was going on in the Digital Diner. Very entertaining. Trying to compete with this kind of message delivery, the booths of other smaller, lesser-known (at least to me) companies offered magic shows and guys on unicycles juggling swords. If you thought Central Park on a sunny summer afternoon was a zoo, then you haven't been to an Internet show lately.
While wandering around, I got the booth number for the NY LUG, named LXNY. Strange acronym for a users group. They were located on the first-floor show area, way in the back. They could not have been further removed from the action. OK, local users group, no money, perfectly understood. I introduced myself to the guys, signed up to their mailing lists and hung out for a chat with them. The guy in charge seemed to be a reasonable chap. He tells me he is a perl nut, or something to that effect. Cool, definite open source kind of guy. There was another guy working on an install of SuSE on his laptop. I peered over his shoulder and saw some of the installation pages as they flashed by while he selected this or that to be installed. Looked nice; a bit more polished than RedHat's install. There was another chap who told me how he partitioned his disk (all wrong according to my rule of partitioning disks: /, swap and /home, and that's it...). Then there was another guy who sported an old red baseball cap with the RedHat logo on it. Looked rather well worn. He had a scruffy beard, and we talked a bit. He told me that he knows Eric Raymond, the guy who wrote that "The Cathedral and the Bazaar" net-paper, from some Sci-Fi shows. He then goes on to tell me about his political slant. He's a libertarian. He tells me that he and Eric, when not talking about open source, talk about politics and guns. "Guns?" I say. Yes, guns. He then asks me if I believe in the first amendment. "Yes," I say. "Do you own a gun?", he asks. "No," I reply. "Then SHUT UP!", he snorts. Yup, guns and the first amendment go hand in hand. He continues to tell me how the 10 amendments have been eroded by the 'Government'. It's hard for me to carry on a conversation with this guy, especially when it turns to Y2K and stocking up provisions for the aftermath.
Day 4) Up and at 'em at 5:45 am. The day turned out to be rather gloomy, with rain forecast. By now, my commuting routine was getting fine-tuned. I got to the train station in time to leisurely buy my round-trip ticket, coffee and bagel, and had 1 minute to hang out on the train platform watching all the other commuters who had equally well-tuned commuting skills. Getting to Javits, I went directly to the special events hall to hear the keynote, which would start in about 10 minutes. The ambiance was much more subdued. The usual MecklerMedia ad stuff on the now more mundane 5 screens rolled on unnoticed. (It's amazing, the capacity of the brain to adapt to new levels of sensory filtering.) The speaker was the Chairman/CEO of AT&T, C. Michael Armstrong. What he had to say was rather boring compared to the previous two speakers. He had no gizmo to show off, or web pages to surf to. He basically announced one thing: the intent of AT&T to take over the internet as we know it. Fair enough. He boasted of the recent $48 billion acquisition of TCI. He waxed about the quality and quantity of future AT&T cable modem services. In all, he came across as the most fine-tuned, image-projecting CEO that I've met. (The only other CEO being Larry Ellison.) Still, I was rather amazed at the skill of this guy in projecting the image of Stability, Strength, Leadership. By the end of his speech, I wanted him to be my grandfather. (Not for the money, mind you.) I recently met NY's senator from Long Island, Al D'Amato. Al is on the opposite end of the spectrum from the CEO of AT&T. When I met Al, the bit which struck me the most was his total arrogance toward the people around him and, at the same time, his attempt to try and look caring. He would crack a forced smile when meeting the audience he was going to speak to. When the cameras were on him, that forced smile would pop back into his mouth, and all the while, he would have this strange glare in his eye, trying to assess everyone he shook hands with. Needless to say, he blew me off when I shook his hand. (No forced smile for me.) But he did have lots of smiles for my wife, who was also in the hand-shaking line. (And a kiss on her cheek to boot!) In contrast was AT&T's CEO. This man had depth. Being around him gave you a sense of solemnity. He was a family man. He set the stage for his speech by telling a joke involving his granddaughter. After he established himself as a caring family man with his joke, he plunged ahead with talk of how AT&T will be in everyone's home, delivering those internet services to you via TV. I guess the big difference between Al and this CEO is the amount of money they truly control. Al controls his campaign funds; he really has little control over the US government budget. In contrast, the CEO controls BILLIONS and is paid much more for it than Al gets for voting in the US Senate. So, the law of capitalism dictates: you get the Al D'Amatos to run the country and the C. Michael Armstrongs to rule the world!
After the keynote, I decided to take a break from the show floor and the TCP/IP sessions I came to attend, to listen to a discussion on the 'Adult Entertainment Industry' put on by the "Investing in the Internet" session. The session was well attended, and the speakers were an interesting and diverse bunch in themselves. They had what I think was a technology consultant for Penthouse. They had some guy who recently wrote an article for Upside Magazine on the subject. (Upside was sponsoring the session.) They had a woman who owned her own adult web site. And there was a guy from a research-type firm who was trying to figure out how much money was being spent on adult web sites. The consulting guy for Penthouse went first. He groaned about the lack of payment for services rendered on these web sites. The researcher gave a short talk on how hard it was to figure out how much money was going into the Internet adult business. His conservative estimate, and believe me, from what he said it is very conservative (i.e., based on the volume of charges of 5 or so popular adult web sites), is that close to $700,000,000 will be spent this year by guys looking at nude girls doing weird things to themselves and others. Then the woman, owner of her own adult portal, raved about the wonders of the business. It's recession proof, it makes MONEY (she broke even in 6 months, but she didn't say how much was invested up front), and there are plenty of models waiting to get into the business. It's safe and virtual. And she thanked Bill Clinton for bringing erotica into the mainstream. She claims to have lots of brunettes posing with cigars. One thing which annoyed me was the video camera which was filming this session. They had the audacity to pan the audience. I had to keep hiding behind the guy between me and the camera to make sure I wouldn't be seen on national TV watching this adult forum and then have to explain to my boss why he should pay $1.4K for my registration fee. I know, it's hypocrisy on my part, but that's just the way I am. So between dodging the camera pans of the audience and listening to the panel mourn the difficulties of IPOing firms engaged in adult content, I got out of the session with this urge to run off and make a billion in porn. Of course I'm not going to do so, but the guy sitting next to me will.
After my short diversion into the underworld of the Internet, I headed back out to the show floor. I had in mind lining up to get into the Digital Diner and perhaps get one of those Motorola burgers they were serving up. (I do a lot of work with Motorola embedded real-time systems, so that Motorola burger would have been a cool fixture on top of my 21'' monitor.) But first I wanted to stop by the SuSE stand to see if I could get a copy of their distribution. I had picked up Caldera's and Pacific HiTech's. Nope, SuSE was still out, and my guess is that they ran out on the first day and the talk of getting more SuSE CDs for distribution today was just hype. There was a lot of action around the Oracle Partners pavilion where the minor Linux distributors were being hosted, so I stuck around. I've heard a lot about KDE, and SuSE packages it with their distribution, so I was checking out what the SuSE guy was demonstrating. After a bit, I got engaged in conversation with the SuSE guy. He introduced himself as Todd Andersen, the guy who claims credit for getting the term Open Source accepted as the new term to replace free software. What a character. His background is with the Department of Defense. He rattled on for about 30 minutes about the spooks in the CIA, how the NSA was a serious organization, and other goings-on of our defense industry which I was trying to grapple with. I'm not sure how Todd got into the Linux business coming from the Defense Department; I missed that part of his introduction. Being a fair-minded guy, despite the fact that I'm rather in the RedHat camp, I thought I would offer to mirror their site. I'm currently mirroring RedHat's and spent $1K of the government's money in doing so. (You need a large disk.) The disk is not totally full, and with SuSE making inroads into the Linux mainstream, I thought it appropriate that I also mirror their site. Todd and Bodo (Bodo is the guy with green hair, as described by Dan Shaffer on CNET radio's "Project Heresy" broadcast of Thursday, Oct 8, who came from Germany to help out his US SuSE brethren) got all excited about this after I told them that the lab I work for has a T3 connection to the internet. I then proceeded to show Todd the Linux resources web page I've put up for people at Brookhaven National Lab, or around the world as that goes, to get some advice on how to get Linux installed on their machines. Todd was losing interest in my web page due to other show attendees coming around to check out their very nice KDE desktop setup. I bade them farewell and took off to check out how the RedHat booth was doing. Over at RedHat, they were fielding many questions from a handful of people. RedHat was going to get another shipment of CDs, which they were going to start giving out at 2pm. I hung around with Melissa and some other chap who used to work for Los Alamos National Laboratory, who got RIF'ed and is now playing a role in this leading-edge company. He made the right move. He was also the guy who rescued the VA Research machine which arrived in a sorry state at the show. One side note I would like to mention is the guy from Adaptec whom I met. As I was hanging around the RedHat booth, I heard some guy say he was from Adaptec. This caught my attention. To me, Adaptec is the premier provider of SCSI controllers for the PC/PCI market. Most motherboards you get these days have a built-in Adaptec SCSI controller chip giving you an on-board SCSI port, much like the on-board IDE channel all motherboards today provide.
With all the experience I've had installing Linux boxes, I've always run into the Adaptec conundrum: great hardware, but a bad driver. I've had several instances where spanking-new 23 Gig Seagate drives were attempted on a SCSI bus hosted by an Adaptec controller and failed miserably to integrate. My solution: forget the Adaptec built-in Ultra Fast SCSI controller and spend $300 on a BusLogic SCSI controller. A sure win. Great SCSI hardware and an even greater driver to go with it. When I put my first Linux box together, I pondered the SCSI question. Which controller? After poking around in the SCSI HOWTO, I found that Leonard Zubkoff got direct support from Mylex to help write the driver, and the decision to buy the BusLogic card was made. And true to the open source/Internet development environment, it was never more than 24 hours before Leonard would send me a patch to his driver when things went wrong. (At one point I had one differential card and one single-ended card installed in my quad Pentium Pro box and things didn't boot right, and Leonard fixed that problem quick.) So, back to Adaptec. Not too long ago, I read a bit of news on the RedHat web page that Adaptec was going to embrace the Linux community, which meant that it was going to release the full hardware specs to the driver writers. Voila, I could finally count on being able to use all those on-board SCSI controllers which I've had to ignore. But ever since I read this great announcement, I have not been aware of any new Adaptec driver updates, as far as I know. So, I gave this guy from Adaptec the same long story I just dumped on you, and he replied with some interesting inside info. First of all, he was not with the SCSI development team; this guy was a sysadmin for Adaptec. But he did tell me that Adaptec has been going through some hard times. With its success in the SCSI market, Adaptec decided to diversify into a whole bunch of other high-tech fields, none of which they turned out to be any good at. He told me that Adaptec stock peaked at $50-something a share and was now down around $5 or so. This has forced Adaptec to go back and concentrate on its core business. Along with that, he tells me that Linux is really big inside the company. He tells me that there is a lot of Linux paraphernalia around, and he picked up the RedHat bumper sticker which lay in front of us and pretended to tack it onto an office cubicle wall. "You see a lot of things like this around the company," he said. Just like the rest of us, the Adaptec employees saw the light in Linux, and my guess is that Adaptec's announcement to support the Linux effort came from a movement within the company, from the employees themselves. I found that insider's view of Adaptec rather interesting.
Melissa told me that the RedHat CD handout was going to occur at 2pm. It being around 1:15pm, I decided to go get lunch and then head for the afternoon keynote. It was raining rather hard, so the hotdog stand was out and I had to spend lots of money on a rather simple barbecue sandwich in the overdone Javits cafeteria. From there it was on to the special events center, where I waited for about 20 minutes for Jim Barksdale, the Netscape chief, to give his view of the Internet world. I thought I was in for a surprise when the music which preceded the talk was a cool jazz piece. This is good: no need for super-hyped-up rock sounds beating your adrenaline system into hyperdrive, a la Oracle. The problem was, as I found out within a few minutes into Jim's speech, he was a total bore. He lacked everything. No charisma, no attitude, no inner drive, nothing. This guy reminded me of mashed potatoes. Netscape, as far as I'm concerned, is the only browser one should use. Maybe if there were a Linux port of IE, I would try it, but without that, there is nothing else which is graphically based. So, here he is, talking about The Browser, but with no charisma to put the punch into his presentation. I, along with the rest of the audience, was losing my attention for whatever message he had to deliver. The selling point of Netscape was the ability to type a keyword into the URL field and have the browser 'find' the page you were looking for, and the 'what's related' button next to the URL field. He spent some time, too much time, plugging this feature. He then talked wonders about the customizability of the browser, either for one's own personalization or to set up some 'portal' for some company too lazy to hire a good webmaster with the proper Java skills to do the job right. At the end of his keynote, Jim took off without giving the audience a chance to approach him afterwards for a question or two and/or to exchange business cards. Another flop move. So be it for Jim. Although, I could hardly sleep the night I found out that Netscape was going to release the source code via their Mozilla.org site. Jim hitched his wagon to the right company at the right time, nothing more. He talked about running Federal Express before running Netscape. Somehow I can't see the connection between the two companies, except that there is something which went wrong here. Steve Jobs and Bill Gates grew up with the field; Jim Barksdale seems to have dropped in like an uninvited guest. I guess it's much the same as the guy Steve Jobs hired to run Apple who eventually dumped Steve from Apple. Us technophiles need to learn some lessons here.
After being let down by the Netscape keynote, I rushed back up to the RedHat booth to see how the CD handout was going. It was going well. There was a line about 20 to 30 people long waiting to get a RedHat CD. I took the opportunity to take some pictures of the line of Linux users to be. With that, I wished the RedHat team good luck in their endeavors and took off to my last session, "Migrating to IP Version 6." This session was given by two IBM consultants out of North Carolina. My first tag-team seminar, with that same tele-evangelist delivery. IPv6, Amen!
I was expecting the session to go until 5:30, but it ended an hour earlier. I was planning to then roam around the show floor a bit more, looking to see if there were any after-hours networking parties to go to. But somehow, after getting to the main entrance plaza of Javits, with the rain coming down, and not having much of a stomach for more Internet World show biz, I canned my plans and made a beeline to Penn Station to catch the 5:22 train back to Long Island. Once on the train, I had an hour and a half to ponder my last 4 days. I've only been to scientific conferences. The last one, Computing in High Energy Physics, I found to be rather tedious and left after two days. (I had a good excuse: the DAQ system for the experiment I'm on was acting up and they needed their expert back in house. Although I could have, and did, solve all their problems by walking the clueless, over the phone, through the various button clicks to get back into full data-taking mode.) After I put Linux on my first home-grown PC, 3 months after getting my Ph.D., my life has been so tied up with this OS that I've often pondered why I continue working at a High Energy Physics lab. I've done my best to help Linux gain inroads into the high energy and nuclear physics community by porting a lot of Fortran code to Linux. I've also leveraged my position at the lab to put together the first official group of Intel PCs running Linux for the scientists to analyze their data. Being in the DAQ subfield of physics gives you a high point from which to watch how the technology used to bring the Internet to life evolves through time. My work has been all internet: Unix workstations, data over IP (gigabytes of data, and now going on to terabytes), routers, switches, e-mail, HTML, Java, X11 and on and on, since I first learned how to program a computer back in my first physics lab when I was 18. Walking around the show floor and going to the sessions brought my whole world around. Internet World is really my world. I knew, in depth or otherwise, every aspect of what was being presented at that show. And with the Linux people there, this added gravy to the show. It was some 4 days. Friday I'll be back to helping BNL users find their way through the Unix/Internet maze of the lab. Monday I'll be back worrying about why I can't sustain 20 Mbytes/sec data throughput in our DAQ, or rather why the clueless users seem to stumble all over my DAQ system. But for now, on my ride home, I just let all those memories of Internet World swirl around my head, as I looked out the LIRR train watching Long Island sweep by.
Saturday, September 26, 1998 was a big day for the Linux community in Canada--that day the First Canadian National Linux InstallFest was held.
The InstallFest was organized on a national level by CLUE (Canadian Linux Users' Exchange) to provide interested people with experienced help installing Linux on their computers. CLUE is an organization that supports the development of local Linux Users Groups, and co-ordinates events, corporate sponsorships and publicity at a national level. CLUE hopes that by enhancing association and communication amongst its developers, users, suppliers and the general public, it can increase the use and appreciation of Linux within Canada.
A dozen different events, all put on by local Linux User Groups, were held across Canada on that same day, from Halifax to Victoria.
The Montreal event, at its peak, had as many as 100 people in the room at once and, by all accounts, had 200 to 250 people stop by. They did 40 installs, only 20 of which were from preregistrations. They even had the crew of the local TV show Branch stop by for an interview, due to air in November. Also worthy of mention, they had guru Jacques Gelinas, the author of the LinuxConf software, answering questions.
Two InstallFests were held in the Toronto area: one at Seneca College and the other at the University of Toronto Bookstore. The Seneca College event had a late start due to a power outage, but more than made up for it, as the unofficial count of installs is about 100. They even rolled out their Beowulf-class Linux cluster for the masses to look at and see how a few ``small'' Linux boxes can be turned into a supercomputer.
The Manitoba UNIX Users Group (MUUG) held their InstallFest at the University of Manitoba as a two-day event beginning on Friday. As this was their first InstallFest, they deliberately kept it small and aimed mostly at the faculty and students of the U of M. About 140 people attended, with more than half purchasing a Linux CD, and there were 19 successful installs. Attendance was greater than expected, probably due to the national news coverage the event received. At least one person came in who said he had discovered the InstallFest by seeing a segment about it on CTV News-1, a national news network.
The MUUG web site made mention of one more interesting story from the event. One attendee brought in a system which became known as ``Franken-puter''! It was apparently two separate cases tossed together with all sorts of spare parts the owner was able to scrounge up, and connected with a piece of coax Ethernet cable. He spent as much time swapping parts and reconfiguring on the fly as he did installing Linux. He apparently showed up at the start of the event on Friday and didn't finish until mid-afternoon on Saturday. Even after all that, he still hung around afterwards to help others with their installs.
The Ottawa InstallFest was hosted by the Ottawa Carleton Linux Users Group (OCLUG). While almost all the other events were held in the more academic settings of local colleges and universities, OCLUG had their event sponsored by NovoClub, a local retail store. NovoClub is located in a shopping mall and managed to get an empty store front for OCLUG to use. They also arranged for display kiosks to be set up in the mall by several companies. There were training companies, a local ISP and, most notably, Corel Computer displaying their NetWinder; and of course, NovoClub was offering specials on their very large selection of Linux products. The whole event was more like a mini trade show than a typical InstallFest.
The unofficial count at the installation store front was that 250 people came through the door. This count included those who came to have Linux installed on their machines, members of the press and ``just curious'' folk who stopped to ask questions while wandering around the mall.
OCLUG chose not to have people preregister; they decided to just let people come and register the day of the event. It was supposed to start at 10 AM and go to 5 PM. However, people were lined up at 9 AM when the mall opened, and they soon ended up with a backlog of machines waiting for Linux installation. By 3 PM they were two hours behind and had to start turning people away. By the time it was over, they had installed Linux on 50 to 60 machines and still had 10 they could not finish.
Not all events were as big as the ones listed above. The New Brunswick Linux Users Group had only ten people attend, with four successful installs. They were a bit upset at the low turnout. However, it was also Homecoming week at Mount Allison University in town, and a football game was in full swing at the same time as the InstallFest. They are in the process of designing a tutorial for their new users and anyone else who is interested. The Fredericton InstallFest was a little larger, with thirty attendees and ten installations.
The general consensus is that as a public relations event, the InstallFest was an overwhelming success. It got a lot of people asking questions about Linux, some of whom took the plunge and installed Linux for the first time. However, it was not completely successful as a technical event. By no means is this a reflection on either those who organized the individual events or the volunteers who helped with the installations--they all did a stellar job--just no one was prepared for the magnitude of the response.
Most LUGs asked people to register prior to the event. This allowed them to get as many volunteers as they thought they would need. Some groups, like the Vancouver Linux Users Group, were swamped with preregistrations and had to halt registration prior to the event because they could not accommodate everyone. Even with preregistration, the day of the event was hectic. The report from Seneca College in Toronto was that their event lasted until 9 PM, and they were still unable to complete all the installs. Other events had similar reports; despite the best-laid plans, the response overwhelmed the number of installers.
Some installs were unsuccessful, either due to time constraints or hardware compatibility issues that were not easily overcome. That said, the ratio of unsuccessful to successful installs was minimal. In most cases, it was one or two to fifty. I've seen more failures on MS Windows installations than that.
One of the interesting side effects of the OCLUG InstallFest was that preliminary discussions were started between Zenith Learning Technologies and Corel Computer to set up a corporate Linux training program. Also, Oliver Bendzsa of Corel Computer reportedly said that he was as busy at the InstallFest as he was at Canada Comdex, a three-day trade show in Toronto that drew some 50,000 people.
Dave Neill, a founding member of OCLUG, said that while grassroots events like the InstallFest are a great way to promote Linux, it is now time to start approaching local computer resellers and showing them there is a demand for systems with Linux pre-installed. I work for Inly Systems, the largest independent computer reseller in the Ottawa area, and while we are now expanding the variety of Linux products we carry, we still do not offer Linux pre-installed on our machines. With at least three technicians who have experience with Linux and/or UNIX installations, we could do this if people began asking for it. We are an exception, though; most resellers don't have technicians with Linux experience.
One of the issues that must be addressed is how and where companies can have their technicians trained. This is where training companies like Zenith Learning Technologies come in. The fact that Zenith was at the OCLUG InstallFest shows that they realize the potential for Linux training. With such companies as Corel, Oracle, Intel and Netscape investing time and money in Linux, it won't be long before other training companies jump on the bandwagon.
Plans are already in the works for a Global Linux InstallFest next year. If you would like to know more or would like to get your LUG involved, please check out the CLUE web site at http://www.linux.ca/ and contact Matthew Rice. An event of this magnitude will need lots of help organizing, so don't be shy--watch out Bill, the Penguin is on the move!
For more information on the individual InstallFest events, please visit the CLUE web site for a list of links to all the participating user groups.
Welcome to the Graphics Muse! Why a ``muse''? Well, except for the sisters aspect, the dictionary definition pretty much describes my own interest in computer graphics: it keeps me deep in thought and it is a daily source of inspiration. This column is dedicated to the use, creation, distribution, and discussion of computer graphics tools for Linux systems.
This month marks the second anniversary of the Graphics Muse column. It's hard for me to believe I've been doing this for that long. My general span of attention is about a year, but I've managed not only to hold onto an interest in this column but to grow it into several articles and covers for the Linux Journal, a book, and a web site devoted to computer graphics and based on this column. I guess when you get on a roll, you stick with it.
The more observant readers will also notice a little change in format for this column. I finally did a little color matching for the various images I use and, at the behest of more than just a few readers, got rid of the multicolumn articles. Most of the announcements are on a page of their own now, although I will be keeping a few on the first page. Overall, I much prefer this new format. It just looks cleaner. I hope you like the changes.
In this month's column I've taken a chance and offered a little editorial on the way things are as I see them. Much of what I've seen in the past few months revolving around Linux has been positive news: announced support from all five major database vendors (Oracle, IBM, CA, Sybase, and Informix), Intel and Netscape announcing investments in Red Hat, and lots of generally good press. But along with this I've seen a fair amount of disunity in the community. There are camps forming between followers of various leaders, and I find this sad. Hard lines drawn by groups with disparate interests and ideas tend to drain the energies of both sides of an argument, and I'd really hate to see that happen with Linux. The worst aspect of these arguments is the distraction they create from the real focus: proving that Open Source/free software can be a viable solution for end users, not just developers. That's key to making Linux a world player in corporations, education, government and on the desktop.
In this month's column you'll find:
Dirk Hohndel has put a new version of XFCom_Matrox on the ftp site and updated the web site at http://www.suse.de/XSuSE/XSuSE_E.html. The new server should work on all current Matrox boards.
Please report any problems with these servers to x@suse.de
The Matrox Meteor is a high-end, professional-quality video capture board commonly used in demanding video capture applications such as laboratory research, robotics, and industrial inspection. The video quality and clarity of its captures are notably superior to those of garden-variety consumer-grade image capture devices, and its price reflects this.
This driver is bundled with single-frame capture software, software for displaying real-time video in a window, patches to make the Meteor work with "vic", a Linux video conferencing package, and other goodies. The "official page" for this package is found at http://www.rwii.com/linux/. Other information about this driver can be found at http://www.cs.virginia.edu/~bah6f/matrox/.
Like the numbering scheme for the Linux kernel itself, the odd middle numeral in the version number ("5") indicates that this is a "development" release (just as kernel 2.0.x is stable while 2.1.x is development). It nevertheless contains numerous enhancements over the last "stable" release, not the least of which are the ability to compile, without hand-patching, against the latest development and stable Linux kernel versions, and the ability to compile and run properly on libc6-based distributions. In practice, this "development" version should prove to be pretty much as stable as the last "stable" release.
The URL is http://crystal.linuxgames.com
Crystal Space is a free (LGPL)
3D engine written in C++. It supports colored lights, mipmapping, mirrors,
reflecting surfaces, 3D models/sprites, scripting, and other features.
The purpose is to make a free and flexible 3D/game engine. Crystal
Space is also a rather large open source project. There are currently about
182 people subscribed to the developers' mailing list. You can join too!
Bob Hepple
mailto:bhepple@bit.net.au
http://www.finder.com.au
DC20Pack is a software package for Kodak DC20 and DC25 digital cameras which contains two programs: dc20term and dc2totga. dc20term transfers the pictures out of the camera and stores them as raw data files. dc2totga converts those raw data files to standard image files using the popular TGA image file format.
URLs:
ftp://sunsite.unc.edu/pub/Linux/apps/graphics/capture/dc20pack-1.0.tgz
http://home.t-online.de/home/Oliver.Hartmann
The following note was posted to the GIMP Developers mailing list on October 19th, 1998:
I'm writing from Australian Personal Computer magazine and would like to congratulate you on having won an Award at our annual IT Awards evening last Thursday. We have a beautiful crystal trophy we would like to send you, having won in the Productivity Software of 1998 category. Please can you forward me your street address and phone number as I would like to send this by courier to you.
Regards
Helen Duncan
New Media Projects Manager
Australian Personal Computer
The official award announcement can be found at http://newswire.com.au/9810/award.htm.
The award is being shipped
to Peter Mattis who will be placing the trophy in the lobby of the XCF
(Experimental Computing Facility) at Berkeley, which is where the GIMP
has its origins. Congratulations to all those involved in the evolution
of the GIMP!
...For those of you who don't read GIMP News, Zach Beane has added a couple of new tutorials to http://www.xach.com/gimp/tutorials/.
...at a refresh rate of 60Hz or lower, you'll often detect an eyestrain-causing flicker on your screen. Flicker generally disappears at 72Hz; the Video Electronics Standards Association's (VESA's) recommended minimum for comfortable viewing is 75Hz. Whichever card you buy, in any price range, be sure that it and your monitor can synchronize to provide at least a 75Hz refresh rate at your highest preferred resolution and color depth. (From ComputerShopper.com's article "Performance on Display". For a sense of what this demands of a card, see the quick arithmetic after these notes.)
...a poll is being run by lumis.com asking which platform you'd like to see Alias/Wavefront's Maya 3D product ported to. Go there and tell the world - we want graphics tools ported to Linux! Slashdot had reported this link and noted that MacOS was way out in front, but the Slashdot effect (tm) had already taken hold by the time I got there and Linux was in front once again.
...you can find collections of free fonts all over the Internet. Take a look at the following sites:
http://www.fountain.nu/fonts/free.html - TrueType only (PC-format downloads are in small type)
http://www.signalgrau.com/eyesaw/html/main.htm - TrueType and PostScript Type 1 fonts (pfb)
http://www.rotodesign.com/fonts/fonts.html - Type 1, but sans most punctuation and some numbers
More sites can be found from Yahoo's listings: http://dir.yahoo.com/Arts/Design_Arts/Graphic_Design/Typography/Typefaces/
...another 3D modeller is under development, this one using C and Tcl/Tk. This one is called Mops and has support for NURB curves and RIB export files. Take a look at The Mops Home Page.
...there are a couple of newsgroups being run off the POV-Ray web site for the discussion of POV-Ray, the 3D raytracing tool, and the display of images. Take a look at news://news.povray.org/povray.binaries.images and news://news.povray.org/povray.general.
...a very good explanation of using matrix transformations with POV-Ray can be found at http://www.erols.com/vansickl/matrix.htm. Additionally, you can find some useful POV-Ray macros at http://www.erols.com/vansickl/macs.htm.
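A quick bit of arithmetic shows why that 75Hz advice separates the cheap cards from the good ones: at 1280x1024 and 75Hz, a card must deliver 1280 x 1024 x 75, or roughly 98 million visible pixels per second. Since about a quarter of each scan cycle is spent in horizontal and vertical blanking, the dot clock actually has to run in the neighborhood of 130MHz (the standard timing for 1280x1024 at 75Hz uses a 135MHz dot clock). The same math at 1600x1200 pushes the requirement past 200MHz.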
A: There is no smudge tool. It has been oft requested, but no one has written one. Some not-quite-the-same alternatives: the blur tool, IWarp, or selecting a region and applying a Gaussian blur to it. Not the same, but alas...
Adrian Likins
adrian@gimp.org

Q: I want to place a block of text with evenly single-spaced lines using some arbitrary font onto my Gimp image. Rather than doing it line by line with the Text Tool, is there an easier way?

A: While Ascii2Image is probably the nicest solution, there is another, somewhat more obscure method. Cut and paste into the text tool entry works: the text tool has no problem with newline characters, so you can create multiple lines of text directly this way.
Seth Burgess
sjburges@gimp.org

Q: Is there any way to get the Gimp to use virtual memory instead of its swap file? I was working on some images where the Gimp swap file was about 30MB. Just about any operation I do causes lots of disk activity. The machine I'm running this on has more than enough physical memory, but it is not being used.

A: Change the value for the Gimp tile cache in the Preferences dialog. With 160MB of RAM, I'd set it to at least 80MB or so. (A gimprc sketch for the same setting follows these questions.)
Adrian Likins
adrian@gimp.org

Q: OK, now, I'm new to Linux and the Gimp - my friends got me into Linux in the last couple of months. How can I save a file in the Gimp without having to merge the layers and still have the graphic look the way it's supposed to? Am I just really missing something here?

A: If you just want to save an "in-progress" version of your image that preserves layers, guides, channels, selections, etc., then you should be saving as .xcf. That's the Gimp's native format.
If you want to "export" an image to a single-layer format without merging the layers by hand, have a look at Simon Budig's export scripts, which automate this task. They can be found at:
http://www.home.unix-ag.org/simon/gimp/export-file.html
Adrian Likins
adrian@gimp.org

'Muse Note: as you can see, Adrian and Seth offer some pretty good advice on the Gimp User's Mailing List!
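'Muse: If you prefer editing config files to dialogs, the tile cache can also be set in ~/.gimp/gimprc. This is a sketch from memory rather than gospel - the token below is what GIMP 1.x uses as far as I recall, and the value is in bytes - so check the comments in your own gimprc before changing anything:

# In ~/.gimp/gimprc - tile cache size in bytes (80MB here).
# Token name assumed from GIMP 1.x; verify against your gimprc's own comments,
# and edit any existing tile-cache-size line rather than adding a second one.
(tile-cache-size 83886080)

Restart the Gimp after the change so the new cache size takes effect.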
Reader Mail
Nick Cali (Mktnc@aol.com) wrote:
Just want to drop a line thanking you for your effort at the Gazette and with Linux. Really, thanks a lot. Please stay.
'Muse: You're quite welcome. I had gotten some rather harsh email from someone recently that had me considering dropping out of the Linux world altogether. Getting little notes like this, however, helps keep me going. Thanks!
And from angus@intasys.com:
Hi Mr Hammel,
Looking at the April 98 issue... Love the column.
'Muse: Woohoo! My favorite kind of reader mail. OK, I'll stick around for a while longer. 'nuff said. :-)
In a previous message, Rolf Magnus Nilsen says:
I'm really sorry for bothering you with this problem, but as an avid reader of the Linux Gazette and the Linux Journal I have read most of your writings there, and I hope you can take the time to answer some questions.
'Muse:
No problem. I try to answer all the questions that come my way, if
I can.
Now, we are going to do a small project in VHS video, and we need some tools for video editing. The problem is, we can't find any tools besides the simplest command line tools.
'Muse: That's because there aren't any "canned" tools yet. See below.
So our current plan is to run a framegrabber, grab about 25 pictures a second, organise them, put in effects/text and use mpeg_encode to make a movie which we play back to our VCR. But this is quite a task when you consider a movie of about 45-50 minutes. I have been searching around quite a bit, but have not found anything better than the tools I mentioned. Do you know any resources or products I should have a look at? Buying a commercial product is OK if it runs under Linux.
'Muse: Unfortunately this area of graphics tools on Linux is pretty sparse. Like you said, there are a number of command line tools for doing very specific tasks (like frame grabbers or creating MPEG video animations) but there aren't any user-friendly, GUI-based tools like, for example, Adobe Premiere. (For the command-line route you describe, a sketch of an mpeg_encode setup follows this exchange.)
That said, there is one project you might want to look into. The project is called Moxy (http://millennium.diads.com/moxy/). Not much information there yet, but its aim is to be a Premiere-style application. It's in *very* early development.
You might also drop a line to the Gimp-Developer mailing list. A number of people had been discussing creating an application like this on that mailing list. I haven't heard what's become of it, however. Adding a plug-in to the Gimp wouldn't be the best way to handle video editing - the Gimp isn't designed for that type of work. But eventually interfaces should be (re: ought to be) developed that allow easy transfer between the Gimp and video editing tools.
No commercial packages that I know of are being ported yet. Desktop publishing on Linux is still somewhat limited to word processors and the Gimp, which lacks the color management facilities that are quite important to most desktop publishing and video editing environments.
I'll post your message (actually this reply) in the next Graphics Muse column and perhaps someone with more information than I have will contact you. If you hear of any commercial packages being ported, let me know. I'd love to start hearing of such ports!
BTW: I'm really looking forward to "The Artists' Guide to the GIMP"; it is ordered already :-)
'Muse: Hey! A sale! The first official one that I know of. I hope you find it useful!
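'Muse Note: for anyone else considering the frame-grab-and-encode route Rolf describes, here is a rough sketch of what driving the Berkeley mpeg_encode tool looks like. The parameter names are from the encoder's documentation as best I recall them, and the directory, file names and frame range are invented for illustration, so treat this as a starting point and consult the docs that ship with mpeg_encode:

# movie.param - example parameter file for the Berkeley mpeg_encode tool.
# File names and frame range here are hypothetical; check the mpeg_encode docs.
PATTERN          IBBPBBPBBPBBPBB
OUTPUT           movie.mpg
BASE_FILE_FORMAT PPM
INPUT_CONVERT    *
GOP_SIZE         15
SLICES_PER_FRAME 1
INPUT_DIR        frames
INPUT
frame*.ppm [0001-1200]
END_INPUT
PIXEL            HALF
RANGE            10
PSEARCH_ALG      LOGARITHMIC
BSEARCH_ALG      CROSS2
IQSCALE          8
PQSCALE          10
BQSCALE          25
REFERENCE_FRAME  ORIGINAL

With grabbed frames saved as frames/frame0001.ppm through frames/frame1200.ppm, running "mpeg_encode movie.param" would produce movie.mpg. Keep in mind that at 25 frames a second, a 45-minute piece is some 67,500 frames - a good indication of both the disk space and the patience the command-line route requires.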
In a previous message, Dylan The Hippy Wabbit says:
I have a particular interest in stereoscopic vision, and so I would like to have an X server that supports shutter glasses.
'Muse: (Note: doesn't anyone go by their real name anymore?) Ouch. My eyes are hurting already just thinking about these. People (like me) who have one eye "stronger" than the other can't see these images, at least not very well. They give me a headache (so do 3D glasses).
In case you haven't heard of these, they use liquid crystals to alternately cover each eye. The display then alternates in phase so that each eye sees only one view. Apart from its use in photography or molecular modelling, it makes one hell of an extension to Quake! Some, although only a few, 3D accelerators support them, and there is an extensive web site, including homebrewed controllers, at:
http://www.stereo3d.com/3dhome.htm
However, I can't find any mention of it in the XFree86 docs. The AcceleratedX web site mentions support for "3D PEX" which I assume is a typo, although it could be something genuine I've never heard of. I've searched the LG archive to find only your mention of a POVRAY "beamsplitter" in issue 27.
Do you know of anything? After all, we can't let DOS/Windows users have anything we can't get, can we? ;-)
'Muse: No such beast is yet available. It's just not in high demand, so you probably won't see it from the commercial vendors unless a paying business customer requests it (with some serious dollars behind the request). XFree86 will support it as soon as someone decides they want or need it and has the time and expertise to write the code for it. If the video cards handle it already, then it's just a matter of adding that support to an existing video card driver (assuming a standard, well-known video chipset on the card). The problem is usually finding someone who knows how to do that. A post to comp.os.linux.x or maybe a letter to the Linux Gazette editor (gazette@ssc.com) will put you in contact with someone. The LG editor will simply post your request in the next issue of the Gazette and, with luck, someone will contact you about their current work in this area. You might also try sending a letter to the XFree86 support address (it's listed on their web site, www.xfree86.org).
I'll post your message in the November Muse column. Maybe one of my readers will contact you about this. Keep your fingers crossed!
By the way, 3D PEX is not a typo. PEX is the PHIGS Extension, a formal X Extension that supports PHIGS, the Programmers Hierarchical Interactive Graphics System. That's a sort of OpenGL from the earlier days of computer graphics, although it's still in use today in a few places.
Normally I wouldn't consider using stock photos from Web-style CD collections because the quality of the photos generally isn't much better than what I can take myself. Additionally, most of those "25,000 (or more) Images" collections you find on the shelves come with images suitable only for the Web - generally no more than about 1024x768 resolution. These are usually far too small for any other media.
But an article in the September 1998 issue of Digital Video magazine covering stock image collections mentioned the Corel image collections, including their Super Ten Packs, as a source of quality stock images. Since I trust this magazine more than my own common sense (which is still rather new to the graphic arts world) and due to Corel's fairly full-blown support for Linux, I decided to check out one or two of these collections.
What is a Corel Super Ten Pack?
The Super Ten Packs are collections of 10 CDs, each with 100 PhotoCD images on them. The current collections are classified into a number of different categories:
Aircraft | Food | Seasons |
Animals | Gardens | Sports & Leisure |
Architecture | Great Works of Art | Textures |
Art, Sculpture, & Design | Landmarks | Textures II |
Business & Industry | Museums & Artifacts | Textures & Patterns |
Canada | Nature | Textures & Patterns II |
Cars | People | Transportation |
England | People II | Travel |
Fashion | People III | Underwater |
There is also a Sampler Ten Pack. The sampler set has CDs titled, among others, "War", "Alien Landscapes" and "Success". Unfortunately, the limited documentation doesn't say from which other Ten Packs these samples are taken. I expect Corel will expand this list further as well, since they tend to produce a large number of stock photography CDs in general.
The images are royalty free, but there are some restrictions on their use. First, you must display the following text somewhere in your publication:
This product/publication includes images from [insert full name of Corel product] which are protected by the copyright laws of the U.S., Canada and elsewhere. Used under license.
Since I'm reviewing the CDs in general, I hope the above counts toward meeting this requirement. Corel also limits online display of the images to 512x768, but that may apply only if you display the image unmodified. It's not clear whether such restrictions exist for derivative works that use the images.
How do you get them?
The Super Ten Packs are available at computer retail outlets or online. I purchased my two sets from MicroCenter here in Dallas. Corel's online site contains thumbnails of their huge collection so that you can preview images before purchase. All of the online versions are watermarked, so don't get any ideas about swiping them from the site (unless you like watermarked images).
Online ordering can be done at http://www.corel.com/products/clipartandphotos/photos/superten.htm. You can also search for individual images and order those online at http://corel.digitalriver.com/. I didn't check whether you can actually order the photos individually or only in the sets that contain them, but a reliable source who has used the service in the past suggested you can purchase them individually.
When you go to http://corel.digitalriver.com/ just click on the Photo CD package image to get a list of titles. From there you can click on the individual CDs to preview all of the images on each CD. Each CD runs about $35-$45US.
What do you actually get?
I purchased two different sets, the Sampler Ten Pack and the Textures II Ten Pack. Both run a little higher at the retail outlet, as expected, and came in boxed sets. Inside each box I found the 10 CDs shrink-wrapped along with a small pamphlet. The pamphlet had the obligatory licensing information along with full-color thumbnail images of all the images on each CD, one page per CD. This is quite useful and something I hadn't quite expected for some reason.
The images on the CD come in PhotoCD format. This format specifies 5 different image sizes:
128x192
256x384
512x768
1024x1536
2048x3072
To read this format you have a couple of options. First, the Gimp has a PhotoCD file plug-in. You can tell if you have this plug-in installed if you try to open an existing file and the Open Options menu includes an entry for PCD. If you try to open a file from the CD by double clicking on the filename in the Load Image dialog, the plug-in is started and presents its load dialog. You'll notice that this plug-in offers the additional resolution of 4096x6144. I'm not certain whether this is a valid PhotoCD resolution, but it didn't seem to matter. Unfortunately, I was unable to read any of the images from the CD at resolutions higher than 512x768 using this plug-in. I had to switch to an alternative, the hpcdtoppm tool from the NetPBM package. With this program I could read the higher resolutions - up to 2048x3072 - into a PPM-formatted file which I could then load into the Gimp. I didn't have time to determine whether the problem was with the Gimp plug-in or the CDs, but I suspect the plug-in is at fault, since I could read the higher resolutions with hpcdtoppm. Note that the plug-in works fine for resolutions up to 512x768.
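For those who haven't used hpcdtoppm, the conversion step looks roughly like the line below. The numeric resolution flags are from the man page as I remember them (-1 through -5 selecting Base/16 up through the full 16Base size), and the file names are only examples, so check your installed version:

# Convert image 1 from the CD at the full 2048x3072 (16Base) resolution.
hpcdtoppm -5 /cdrom/photo_cd/images/img0001.pcd img0001.ppm

The resulting PPM file loads straight into the Gimp, or can be handed to the other NetPBM tools if you need a different format.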
RMS vs. Raymond vs. Users
Both RMS (Richard M. Stallman) and Eric Raymond have done wonders for the community and both should be applauded for their efforts and dedication. However, their spirited enthusiasm, in the manner and form in which they display it in public, is not necessarily what we need now. Linux and free software/Open Source software is a community, one that has grown beyond its bare communal spirit and now encompasses a metropolitan mix of individuals and groups. And that mix includes a high number of end users - not developers, not hackers - users. I wonder now whether either RMS or Raymond is truly interested in the end user, or whether their focus is solely on developers' needs. At this point, the community needs to focus on both.

Commercial vs. Free and World Domination
Unlike many Linux fans, I have no problem with commercial (re: proprietary) software. There are people who both need and desire commercial software, regardless of what developers might consider the higher moral ground. I personally will use the tools which best suit my needs. I have always wanted a Unix desktop, ever since my days working on the Dell Unix products in the early 1990s, and Linux is it for me. If commercial applications begin to show up that work well for me, I will use them. I already use Applixware and commercial versions of the sound drivers and X server. You don't have to encourage commercial development, but you shouldn't attack it either. Having a different point of view does not make someone wrong or generally evil. If you provide alternatives to commercial products, you'll find many people who will both use and support those alternatives. But to dissuade others from using commercial products without first providing the alternative is tantamount to using the same tactics Microsoft uses with its vaporware announcements. Convince by doing first. That makes the counterargument - the argument for commercial or proprietary software - more difficult to sustain.
On a related subject: World Domination by Linux is not a goal I seek. The first reason is obvious - if you displace Microsoft, you lose the strongest focal point that currently exists for the free software movement: the drive to displace Microsoft. It is a bit of a catch-22, but I'd rather have Microsoft stay strong to keep developers in the Linux community on edge. They seem to thrive on that. Without real leadership in our community (and I'm not convinced we have that one strong individual or group that can claim the leadership role), it is imperative that the strong focal point be kept clear. Focus is key in any project, be it writing software or climbing mountains or writing columns like this one.
The other reason I don't want world domination is that I really don't want to replace one egotistical maniac with several thousand (or million). Great developers are egotistical - it's a form of self-confidence not unlike that displayed by great artists. But I wouldn't want either in charge of my personal computing world. They see the world from their perspective, and that perspective can be clouded by their own intellect. It can be difficult to see the frustration of others when their problems seem trivial to you and easily solved. Instead, I'd rather have the ability to control my own computing environment by having the opportunity to choose between multiple solutions to similar problems. I'd love to see the Mac and BeOS expand their market share because, in the end, it only opens up my vistas of choice. And that's what Linux is really about for end users: freedom of choice.

Vi vs. Emacs
Vi, of course. Unless I have to write a book or article for non-Linux publishers. Then ApplixWords.

Red Hat or Debian or S.u.S.E.?
Depends mostly on what you want and where you live. All three produce decent distributions. I tend to think of Debian as aimed more toward the technical crowd, while the other two are more amenable to the average Joe. I use Red Hat 4.2. Why? Because two years ago, when I was ready to upgrade from my Slackware distribution, I went into SoftPro Books in Denver and found Red Hat abundantly stocked. S.u.S.E. wasn't there yet. Neither was Debian. It was a simple choice back then, really. But as with Linux in general, the good news is that I have choices. That's important. I'll be upgrading again at the start of the year, probably in February. By that time most of the kinks in dealing with libc/glibc should be worked out from the installation point of view. I may go with Red Hat 5.2 if it's out by then. But S.u.S.E. sure has had a lot of good press too. It probably doesn't matter that much; I don't even use RPMs on my machine except during an initial installation. After that, I install free software from source and commercial packages from CDs (in whatever form they come in).

GPL, LGPL, NPL, or Artistic License?
See what I mean? Choice. This sort of thing seldom crops up in the Microsoft world. Which is best? I won't say. Of all the arguments that have arisen repeatedly over the past two years, this one is most certainly a matter of personal choice. I will recommend, however, that if you consider releasing software to the free/Open community, you read through each of these and try to understand them before releasing and before creating your own license. I did the latter. It was a bad choice.
GPL: http://www.gnu.org/copyleft/gpl.html
LGPL: http://www.gnu.org/copyleft/lgpl.html
NPL: http://www.mozilla.org/NPL/
Artistic: http://language.perl.com/misc/Artistic.html

Where to go from here - Desktop Graphics
Ok, I've blabbered on for too long with my own opinions that really have nothing to do with graphics on Linux. I need to focus. What do we have now and what do we need? How do we get it? And who are "we"?
We are the people who desire the tools to do the graphic arts work from which we both find enjoyment and make our livings. As of now, the tools for Linux are mostly geared toward Web development, a medium born from the same family as the images we create. Most of the tools are command line driven, with a few GUI-based tools like the Gimp or perhaps ImageMagick. But we lack certain features to go beyond Web images. We lack any real form of color management in the Gimp, needed for prepress operations. We have 3D modellers, but are they sufficient for commercial animation work? And what about video editing tools? Nothing exists at this point beyond one project in a very early stage. We have some hardware acceleration for 3D video chipsets but lack consistent support from vendors. Most important, we need a desktop that makes porting applications - or writing new ones - inviting to those who need to interact with other tools.
There are plenty of tools for commercial artists and effects houses that already exist on other Unix platforms. What would it take to make those people want to migrate to Linux? Vendors are fond of saying that end user demand is what drives ports to new platforms. We need to know if the demand exists and, if not, why not. I've spoken with two effects houses in the past who use Linux in rendering farms (groups of Linux servers number-crunching 3D images with little to no user interaction). Linux as a server once more. Is Linux not appropriate as the front end of the special effects development process? What about for desktop publishing? All you Quark and Adobe users - what do we need? Would you use the ports if they were made available?
I write this column out of a desire to learn about computer graphics. The only graphics tools I'd ever used before moving to Linux were MacDraw and Micrografx under DOS many years ago. I'm not familiar with the Adobe series of graphics programs, nor QuarkXPress, nor the SoftImage tools or other SGI-based applications. I need feedback from users of these tools to know what to pass on to the rest of my readership. There are likely a few readers who would be willing to work on projects if they knew what needed to be done. Grassroots efforts by end users to convince commercial vendors that ports of existing applications to Linux would be worth their effort are also needed. Corel appears to be porting all their applications to Linux. I assume this means CorelDRAW will be coming out sometime in the next 6 months. At least then I can see what a commercial application looks like. If I could only get my hands on Adobe Premiere or QuarkXPress for Linux.....
Most important of all, I need to know what the readers need - desktop tools for the small prepress environment? Web tools? High-end graphics tools for research and the entertainment industries? Perhaps multimedia authoring tools? Or just simple tools for doing common tasks at home, those that are readily available for the Mac and MS platforms and cost a buck and a quarter at the local computer retail outlet.
Graphics on Linux needs focus. We have the kernel supporters and the desktop supporters who have driven the server side of Linux to the point that the rest of the world is not only aware of Linux but enthusiastic about joining the community. Now we need the graphics folks to mobilize and show that we can go beyond the realm of back room servers.
Or can we?
[ More
Musings ]
Online magazines and news sources where I get much of the information in this column: C|Net Tech News, Linux Weekly News, and Slashdot.org - plus an assortment of general web sites, mailing lists and newsgroups I keep an eye on.
Let
me know what you'd like to hear about!
Graphics Muse #1, November 1996
Graphics Muse #2, December 1996
Graphics Muse #3, January 1997
Graphics Muse #4, February 1997
Graphics Muse #5, March 1997
Graphics Muse #6, April 1997
Graphics Muse #7, May 1997
Graphics Muse #8, June 1997
Graphics Muse #9, July 1997
Graphics Muse #10, August 1997
Graphics Muse #11, October 1997
Graphics Muse #12, December 1997
Graphics Muse #13, February 1998
Graphics Muse #14, March 1998
Graphics Muse #15, April 1998
Graphics Muse #16, August 1998
Graphics Muse #17, September 1998
Graphics Muse #18, October 1998
© 1998 Michael J. Hammel
The Blender Manual is available for purchase: http://www.blender.nl/shop/index.html. It looks to be about $49US for a fairly hefty manual.

Moxy 0.1.2
Moxy is a linear video editor, much like Adobe's Premiere. It can load many different file formats (including MJPEG AVIs, P*Ms and JMF) and output AVIs. It comes with some transitions (you can make your own; they're plug-ins) and you are free to contribute code. http://millennium.diads.com/moxy/

Quick Image Viewer 0.9.1
Quick Image Viewer (qiv) is a very small and fast GDK/Imlib image viewer. Features include zoom, maxpect, scale-down, fullscreen, brightness/contrast/gamma correction, slideshow, horizontal/vertical flip, left/right rotation, delete (move to .qiv-trash/), jump to image x, jump forward/backward x images, and a filename filter; you can also use qiv to set your X11 desktop background. This version works on Solaris/SunOS again.
This release adds copy and move capabilities, the ability to hide the tools area, the ability to cancel thumbnail generation by pressing Escape, and more.
http://www.klografx.de/
http://www.geocities.com/SiliconValley/Haven/5235/
On September 25, 1998, Digital
Domain instructed Mr. Bill Spitzak to discontinue development of FLTK.
Shortly thereafter a group of developers for FLTK reincarnated the library
on a mirror site so that development could continue. The FLTK web page,
FTP site, mailing list, and CVS server are being hosted by Easy Software
Products, a small software firm located in Maryland. Easy Software Products
develops commercial software and supports free software.
http://fltk.easysw.com/
New in this release of Jim's
fonts for X is a set of alternate NouveauGothic fonts with a more traditionally
shaped ampersand glyph, for those who don't particularly like the style
of NG's regular ampersand.
http://www.ntrnet.net/~jmknoble/fonts/
MathMap is a GIMP plug-in which allows distortion of images specified by mathematical formulas. For each pixel in the generated image, an expression is evaluated which should return a pixel value. The expression can either refer to a pixel in the source image or can generate pixels completely independent of the source. MathMap not only allows the generation of still images but also of animations.
The MathMap homepage can be found at
http://www.unix.cslab.tuwien.ac.at/~schani/mathmap/
It includes a user's manual as well as screenshots and examples.
Changes since 0.6:
The following announcement was posted to comp.os.linux.announce by MetroLink, Inc.
NOW AVAILABLE FOR LINUX/ALPHA!
The Metro-X Enhanced Server Set from Metro Link is now available for Linux/Alpha. Metro-X provides more speed and more features at a very LOW PRICE!
Metro-X 4.3 is an X11 Release 6.3 server replacement with all the features you need. It provides support for the fastest, most popular graphics cards on the market today. In addition, Metro-X includes touch screen support and multi-screen support at no extra charge! So what IS the charge? Only $39!
===GRAPHICAL CONFIGURATION UTILITY===
Forget hand editing configuration files or clumsy character-based setup utilities. Metro-X 4.3 includes a state-of-the-art graphical configuration program. ConfigX helps you get up and running in no time.
===EXTENSIVE GRAPHICS CARD SUPPORT===
Want support for the latest, highest-performance graphics cards? Then you want Metro-X 4.3. Check the Metro-X 4.3 cardlist for Linux/Alpha on the web site to see which cards are supported, as well as the available resolutions and colors for each. In addition, as support becomes available for new cards between releases, these updates to Metro-X 4.3 will be made available at no charge.
===MONITOR SUPPORT===
Tired of adjusting your monitor or hand-editing timing parameters? With Metro-X you can relax. Just select your monitor from the list and we do the rest. Even adjusting the image is made easy with a graphical adjustment tool. Using the mouse or keyboard, you simply stretch, shrink, or shift the image placement on the monitor!
===TOUCH SCREEN SUPPORT INCLUDED===
At no extra charge, Metro-X 4.3 includes support for several models of touch screens. These include the serial touch-screen controllers from:
Carroll Touch
EloGraphics
Lucas Deeco
MicroTouch
===MULTI-HEADED DISPLAY SUPPORT INCLUDED===
At no extra charge, Metro-X 4.3 includes support for up to 4 screens per server, which can all be controlled simultaneously with a single keyboard and mouse. This allows you to run many applications without overlapping windows. The graphical configuration utility makes it simple to configure multiple graphics cards and even lets you pick the screen layout. You can utilize this feature with many combinations of these cards:
Matrox Millennium
Matrox Millennium II
Matrox Mystique
Matrox Mystique 220
NOTE: Only one Mystique or Mystique 220 (not both) may be used in the combination.
===ROBUST PERFORMANCE===
Reliability and performance are the foundation of Metro-X. Our customers are using Metro-X in demanding applications from the Space Shuttle to the Battlefield. Metro-X 4.3 incorporates advanced dynamic loader technology which eliminates the need to build servers for specific types of graphics cards. It makes server configuration quick and easy.
===METRO OPENGL EXTENSION AVAILABLE SOON===
Using Metro-X's dynamic loader technology, adding an extension like Metro OpenGL is as easy as installing a package and running a program. This product will be available for Linux/Alpha very soon.
===METRO LINK TECH SUPPORT===
As always, software purchased from Metro Link comes with 90 days of free technical support (via phone, fax, or email) and a 30-day money-back guarantee.
===SYSTEM REQUIREMENTS===
HARDWARE: Metro-X 4.3 requires 14 MB of disk space. 8MB of RAM are required; 16 MB are recommended.
SOFTWARE: Packages are provided in both RPM and tar/gzip formats. Metro-X 4.3 requires these minimum versions of software: Linux Kernel 2.0.30; glibc 2.0.5c; and XFree86 3.3.1.
===AVAILABILITY AND DISTRIBUTION===
PRICE: Metro-X 4.3 is $39
AVAILABILITY: Now
DISTRIBUTION: Metro-X is only distributed via FTP. A postscript version of the Metro-X manual is included. With a credit card payment, the FTP instructions are usually emailed on the same day the order is received. Be sure to include your email address when ordering.
===CONTACT METRO LINK===
www.metrolink.com
sales@metrolink.com
+1-954-938-0283 ext. 1
+1-954-938-1982 fax
Version 2.1.6 of the GNU plotting utilities ("plotutils") package is now available. This release includes a significantly enhanced version of the free C/C++ GNU libplot library for vector graphics, as well as seven command-line utilities oriented toward data plotting (graph, plot, tek2plot, plotfont, spline, ode, and double). A 130-page manual in texinfo format is included.
As of this release, GNU libplot can produce graphics files in Adobe Illustrator format. So you may now write C or C++ programs to draw vector graphics that Illustrator can edit. Also, the support for the free `idraw' and `xfig' drawing editors has been enhanced. For example, the file format used by xfig 3.2 is now supported.
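To give a flavor of how the command-line utilities are driven (the format names below are assumptions based on this announcement - run graph --help or consult the manual for the authoritative list):

# Plot a two-column ASCII file of x-y pairs to PostScript...
graph -T ps data.dat > plot.ps
# ...or, new in 2.1.6, to Adobe Illustrator format for further editing.
graph -T ai data.dat > plot.ai

The -T option selects the output driver, so the same data file can be rendered for X, PostScript, xfig or Illustrator without changes.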
RPMs for the plotutils package are available at ftp.redhat.com and at Red Hat mirror sites. The following are available:
ftp://ftp.redhat.com/pub/contrib/i386/plotutils-2.1.6-1.i386.rpm
ftp://ftp.redhat.com/pub/contrib/sparc/plotutils-2.1.6-1.sparc.rpm
ftp://ftp.redhat.com/pub/contrib/SRPMS/plotutils-2.1.6-1.src.rpm
For more details on the package, see its official Web page, http://www.gnu.org/software/plotutils/plotutils.html.
I hope you find this release
useful (send bug reports and suggestions for enhancement both to bug-gnu-utils@gnu.org
and to me). Enjoy.
Robert S. Maier - rsm@math.arizona.edu
This library is designed to make it easy to write games that run on Linux, Win32 and BeOS using the various native high-performance media interfaces, (for video, audio, etc) and presenting a single source-code level API to your application. This is a fairly low level API, but using this, completely portable applications can be written with a great deal of flexibility.
SDL has been split into a stable release, 0.8.x, and a development release, 0.9.x. The stable version is very robust, having been extensively tested over the past 3 months. The development version has some exciting features in progress, such as automatically adjusting to display changes, CD-ROM support, and more.
Get it now from: http://www.devolution.com/~slouken/SDL/
GIMP lovers, grab your brushes!
The "Official SDL Logo Contest"
is now in session. Send your entries, or instructions on downloading
your entries, via e-mail to slouken@devolution.com
The winner will get his or her logo on the SDL web site and their name in the CREDITS list for the next version of SDL! You can view the contest entries at http://www.devolution.com/~slouken/SDL/contest/.
And if you're wondering "what can I really do with SDL?", be sure to download the examples archive, which contains demonstrations of SDL's capabilities.
Enjoy, Sam Lantinga
<slouken@devolution.com>
It is available from http://muon.kaist.ac.kr/~hbkim/linux/tkscanfax and also from the apps/graphics/capture directory at Sunsite.
There is no documentation
at this time. Please send questions, problems, comments or suggestions
to Hang Bae Kim <hbkim@muon.kaist.ac.kr>
MAM/VRS is a library for animated, interactive 3D graphics, written in C++. It works on Unix (tested on Linux, Solaris and Irix) and Windows 95/98/NT. MAM/VRS can produce output for many rendering systems: OpenGL (or Mesa), POVRay, RenderMan and VRML are supported. It provides bindings to many GUIs: Xt (Motif/Lesstif/Athena), Qt, Tcl/Tk, MFC and soon Gtk. It is covered by the terms of the GNU LGPL. Visit our homepage for more information and to download it:
http://wwwmath.uni-muenster.de/~mam
Though this is the first public announcement, MAM/VRS has been in active development and use for a long time and is stable. MAM/VRS is not a 3D modeler or a 3D CAD/CAM program, but it ...
There was a slew of announcements from Xi Graphics posted to comp.os.linux.announce this past month. I've globbed them together here under a single section. - 'Muse
SiS 5598 support
Xi Graphics made SiS 5598 support available on September 14th, joining the previously supported SiS 6326. Configurations for 1, 2, 3 and 4MB are supported. The SiS 5598 is not capable of supporting overlays but does support hardware gamma color correction in all color depths. Maximum supported resolution is 1600x1200@60Hz in 8bpp, and 1024x768@75Hz in 24bpp (packed) operation. A hardware cursor is supported in all color depths. The Accelerated-X Server conforms to the X Window System as measured by the freely available X Test Suite.
The update, which should be applied against the desktop (AX version) Accelerated-X Server version 4.1.2, is available from the Xi Graphics Anon-FTP site at:
ftp://ftp.xig.com/pub/updates/accelx/desktop/4.1.2/D4102.013.tar.gz
Instructions for applying the update and more detail may be found at:
ftp://ftp.xig.com/pub/updates/accelx/desktop/4.1.2/D4102.013.txt
The update may also be applied to the freely available Accelerated-X demo at ftp://ftp.xig.com/pub/demos/AX412.Linux.tar.gz for customer testing prior to purchase of the product.

NeoMagic MagicMedia, including Toshiba Tecra 800 and the Gateway Solo 5150
Xi Graphics is pleased to announce the release of support for the NeoMagic MagicMedia 256AV, also known as the NM2200 and the NM2360. This is much faster than previous NeoMagic chipsets, as the new benchmarks show. The initial machines explicitly supported are the Toshiba Tecra 800 and the Gateway Solo 5150. Benchmark tests were conducted on a Toshiba Tecra 8000 with a Pentium II/266MHz processor, making the results broadly comparable with those for the ATI Rage LT Pro announced in August. The Accelerated-X Server, Xaccel, passes the X Test Suite with hardware acceleration in all color depths.

Accelerated-X/OGL
Xi Graphics, leader in X Window System technologies for Intel Linux and UNIX systems, will be shipping a limited-quantity edition Technology Demo of its new Accelerated-X/OGL product. Accelerated-X/OGL is the fifth architecture generation of Accelerated-X and has been specifically altered to provide support for a wide range of 3D graphics chips. The limited edition Technical Demonstration release offers an opportunity for games and other developers to influence the final delivered product. The Accelerated-X/OGL Technology Demo Evolution 1 product features:
- Only available for threaded OS's:
- Linux/glibc2
- Solaris/x86 2.51, 2.6
- UNIXware 7
- Fully accelerated 2D Server, X11R6.4 specs
- OpenGL 1.1.1 libraries, GLX Server extension
- Hardware accelerated 3D support for:
- Number 9 Revolution IV aka Ticket to Ride IV
- Intel 740
- Real3D StarFighter,
- Diamond Stealth II G460
- etc
- 3Dlabs Permedia 2
- Diamond Fire 1000 GL Pro
- ELSA Winner 2000/Office
- etc
- 3Dlabs Glint MX & Delta
- ELSA Gloria-XL
- Leadtek L2520
- etc
- SiS 6326
- ASUS AGP-V1326
- etc
- 8, 15/16, 24/32 bpp if supported by hardware
- Overlays if supported by hardware
- multiple concurrent 3D windows, not just full screen
- Tablet support (Wacom, Summagraphics protocols)
- Spaceball (6DoF) support for Logitech Magellan
For this limited edition of the product, please contact devrel@xig.com to apply for a copy. Xi Graphics Sales are unable to take orders for this product, as we expect the demand to significantly exceed the supply!
Monthly drawings for copies of Accelerated-X
Xi Graphics is pleased to announce that we're giving away free copies of the industry-leading Accelerated-X Display Server. These are full, up-to-date, legal and supported copies of the product.
To register to win one of the two free copies we're giving away every month, either register for the monthly draw on our web site (http://www.xig.com) or send email following the directions published below. We do, of course, have a motive for this. We want to know what graphics board, monitor, input devices and operating systems you'd most like to use with Accelerated-X, and we want to find out about the kind of machine you use.
To enter the drawing by email, you must complete an entry form for that month's draw. Send the email to Andrew Bergin (abergin@xig.com) with the subject "Free Draw Entry" and include in your message the following details:
Your Name
Your Company or Organisation
Your Shipping Address
Your Email Address
(Optional) Your Web Site Address
What make and model of computer, and processor speed
The Graphics Card you'd like to use
The Monitor you'd like to use
Your preferred pointing devices
Your preferred operating system
There will be a new draw every month. You must enter each month to be eligible for that month's drawing. Only one submission per person, per drawing, will be accepted.
The winner's name may be put on our web site and in other promotional material. Information collected, including your email and physical shipping address, will not be sold to any third party. We may use your address for our infrequent mailings unless you write to ask us to remove your name from the mailing list.
Xi Graphics will pay the cost of shipping only, including international shipping. Any and all customs charges and/or taxes are the responsibility of the winner.
© 1998 by Michael J. Hammel