TWDT 1 (gzipped text file) and TWDT 2 (HTML file) are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
This page maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright © 1996-2000 Specialized Systems Consultants, Inc.
The Mailbag!
Write the Gazette at gazette@ssc.com
Contents:
Answers to these questions should be sent directly to the e-mail address of the inquirer with or without a copy to gazette@ssc.com. Answers that are copied to LG will be printed in the next issue in the Tips column.
Before asking a question, please check the Linux Gazette FAQ to see if it has been answered there.
Viddy these well, little bruthers, viddy well. It would seem that our friends 'ere, like, 'ave a problem with their Linux boxes. If thou wouldst be so kind as to, like, give them a little 'and, I'm sure they would love it, real horror-show like. But first, me little droogies, an introduction is maybe in order. My name is Michael Williams and I live in the UK (Wales). As of now, I will be helping to format the Mailbag's columns. What's with all the blurb, you ask? Do we all speak like that in Wales? No! I'm actually basing my character on Alex from "A Clockwork Orange".
Fri, 5 May 2000 12:49:43 +0200
From: Joseph Simushi <jsimushi@pulse.com.zm>
Subject: New User - Help
Hi,
I am a new user of Linux and am running the Red Hat version 6.1.1
operating system. I am asking for your help in finding materials
(books, websites, CD write-ups) on Linux to help me in
administering this system. Regards,
Simushi Joseph
LAN Administrator
PULSE Project
P.O. Box RW 51269
Lusaka
Zambia.
Fri, 5 May 2000 15:24:48 +1000
From: Eellis <abacus2@primus.com.au>
Subject: Prepress Rip software
Hi, I'd like to find out: is there any third-party or shareware
RIP software to use on PostScript and PDF files instead of using
a Scitex or Adobe RIP?
Many thanks from Down Under.
Ezra Ellis
Thu, 04 May 2000 20:35:28 -0400
From: Raul Claros Urey <raul@upsaint.upsa.edu.bo>
Subject: Help
I'm using Linux Red Hat, kernel 2.5.5, and when it is booting it
reports the following errors:
RPC: sendmsg returned error 105: Unable to send; errno = No buffer space available
NFS mountd: neighbour table overflow
Unable to register (mountd, 1, udp)
portmap: server localhost not responding, timed out
And I can't do anything; the message "neighbour table
overflow" appears every time. Do you know something about
it?
Atte.
Raul Claros Urey
Thu, 4 May 2000 04:34:58 EDT
From: <ERICSTMAUR@aol.com>
Subject: Password recovery for equinox database
Do you know where I can find software which can recover my
password on an Equinox database?
Mon, 01 May 2000 20:22:18 +0800
From: 61002144 <61002144@snetfr.cpg.com.au>
Subject: resolution
My computer under Linux Red Hat X Window will only run 300x200
graphics. Even if I hit Ctrl+Alt+Plus, it won't change. I have a
SiS620 card with 8 MB. Can you please help? I have spent a lot of
time on the Internet; it seems other people have the same problem
but no one can help.
Rudy Martignago
Sat, 29 Apr 2000 01:09:12 +0100
From: Andy Blanchard <andyb@zocalo.demon.co.uk>
Subject: Help wanted - updating a Linux distro's ISO image
While downloading the newly posted kernel updates to Red Hat 6.2,
the following question arose in my mind, the answer to which
might be of use to anyone who has to build numerous Linux boxes.
If one were to replace arbitrary RPM (or DEB, or ...) files on
the distro with updated versions and burn a new CD-ROM, would it
still install cleanly? If I understand this correctly, the answer
is "yes" if the installer runs:
rpm --install kernel-doc
but will be "no" (unless you can frig the install script) if it runs:
rpm --install kernel-doc-2.2.14-12.i386.rpm
Can anyone give me an answer? An inquiring mind wants to know...
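No definitive answer here, but the difference between the two installer styles can be sketched with plain shell: a glob on the package name survives a version bump, while a hard-coded file name does not. The directory and version numbers below are made up for the demonstration:

```shell
# Simulate a distro tree where kernel-doc was replaced by an update:
mkdir -p /tmp/distro
: > /tmp/distro/kernel-doc-2.2.16-3.i386.rpm
# Style 1: a literal file name from the original CD no longer resolves:
[ -f /tmp/distro/kernel-doc-2.2.14-12.i386.rpm ] || echo "fixed name: broken"
# Style 2: a glob on the package name still matches the updated file:
ls /tmp/distro/kernel-doc-*.i386.rpm
```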
Sat, 6 May 2000 13:08:20 +0200
From: Drasko Saric <doktor@beotel.yu>
Subject: trouble with full partitions...
Hi, I have a problem and I hope you'll help me. I have Linux SuSE
6.1 and Win98 on one machine. My Linux partition of 800 MB is full
now, and I wish to add (if it's possible) an extra 800 MB to my
existing Linux partition from the Win98 partition, but I don't
know how. Can you help me?
Thanx in advance. Drasko,
Belgrade, Yugoslavia
Sun, 07 May 2000 00:35:12 -0500
From: edge or <edge-op@mailcity.com>
Subject: Help in setting up Red Hat as a dial-up server -- LG#53
I have searched and searched for 2 months now and cannot get any
info on how to set up a server for customers to dial into and
access the Internet with mail accounts and such. I have been to
every newsgroup and discussion I can find. No one will give any
information on how to set this up. The ONLY help or answer I get
is: "Why do you want to be an ISP? They are too expensive to set
up." Please could you have a "How-To" for the beginner setting up
an ISP for the first time?
Thanks in advance.
First, I hope you've received better answers in the meantime.
Second, I hope the following links help (apologies for the
mailcity redirection stuff):
http://alpha.greenie.net/mgetty/
Notice about midway down the page there are links specifically
related to your question. This will get your callers connected to
your box.
http://www.tuxedo.org/~esr/fetchmail/index.html
This is nice for grabbing the mail and handing it off to sendmail
for local delivery. As for sendmail configuration, I'm clueless.
You could also check out this HOWTO: http://www.linuxdocs.org/ISP-Setup-RedHat.html. It's meant for Red Hat systems, but I'm sure it could be adapted to another distribution with little difficulty.
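For the dial-in half of the equation, mgetty can hand PPP callers straight to pppd via its AutoPPP hook. A commonly quoted fragment of its login.config is sketched below; the path and pppd options vary by version, so check the mgetty documentation before copying it:

```
# /etc/mgetty+sendfax/login.config (location is distribution-dependent)
# Callers whose modems start PPP immediately are handed to pppd:
/AutoPPP/ -  a_ppp  /usr/sbin/pppd auth -chap +pap login
```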
Mon, 8 May 2000 11:22:19 +0100
From: Steven Cowling <steven.cowling@sonera.com>
Subject: bread in fat_access not found (error from Redhat 6)
The following error was scrolled down the screen:
bread in fat_access not found
We are running Red Hat version 6 on a PC and it has been running fine for about 6 months. The latest work done has been to start using CVS, which was installed with the initial installation of Red Hat. CVS has been working fine for about a week. Since the bread error appeared, we are unable to log in either at the console or remotely using telnet from Windows. Every time we try to log in, the "login incorrect" error appears. We have tried all user names and root. The strange thing is that we can still use CVS from our Windows machines using WinCVS 1.0.6 to log in and check files in and out. Basically we can't log in normally at all. Has anybody seen this before? Or know what 'bread' is? Any help or suggestions would be greatly appreciated.
Steve Cowling
Mon, 8 May 2000 13:50:15 -0500
From: <Stephen.W.Thomas@Nttc-Pen.Navy.Mil>
Subject: High Availability Hosting
In your latest issue of Linux Gazette you have an article titled
"Penguin power credited for 100.000% network availability". This
article mentions different classes of web hosting based on
uptime. Where on the net can I find a definitive source for these
different classes?
Thanks,
Steve.
Wed, 10 May 2000 09:49:25 -0400
From: Ruven Gottlieb <igadget@earthlink.net>
Subject: Redirecting kdm output from console to file
Hi,
I've been trying to figure out how to redirect console output
from tty1 to a file when starting kdm. With startx I use an alias:
alias startx="startx >& /root/startx.txt"
to send output to /root/startx.txt, but I can't figure out what to do to get the same thing to happen with kdm.
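Since kdm is normally started by init rather than from a shell, an alias never applies to it. One approach (a sketch only - the inittab entry and kdm path differ per distribution) is to point the x:5:respawn: line in /etc/inittab at a small wrapper that does the redirection. The Bourne-shell idiom for csh's ">& file", shown here with a stand-in command:

```shell
# Wrapper sketch: capture both stdout and stderr to a log file.
LOG=/tmp/kdm-demo.log
run_logged() {
    "$@" >"$LOG" 2>&1    # the real wrapper would: exec /usr/bin/kdm -nodaemon
}
run_logged echo "kdm console output would land here"
cat "$LOG"
```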
Tue, 09 May 2000 23:22:52 +0530
From: "pundu" <pundu@mantraonline.com>
Subject: calculate cpu load
Hi,
I would like to know how one can calculate the CPU load and
memory used by processes, as shown by the 'top' command. It would
be nice if anyone could explain to me how to do this by writing
my own programs, or by any other means.
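top reads these figures from the /proc filesystem: /proc/stat for the system-wide CPU counters, /proc/&lt;pid&gt;/stat for per-process times, and /proc/meminfo for memory. A minimal shell sketch of the CPU side (field layout per the proc(5) manual page):

```shell
# Sample the aggregate "cpu" line of /proc/stat twice and compute the
# busy percentage over the interval, much as top does on each refresh.
# Fields after the "cpu" label: user nice system idle ...
read -r _ u1 n1 s1 i1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 _ < /proc/stat
busy=$(( (u2 + n2 + s2) - (u1 + n1 + s1) ))
total=$(( busy + i2 - i1 ))
pct=$(( 100 * busy / total ))
echo "CPU busy: ${pct}%"
```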
Mon, 8 May 2000 10:32:25 +0000 (GMT)
From: Jimmy O'Regan <jimregan@litsu.ie>
Subject: Is There a Version of PC/NFS for Linux?
I have the O'Reilly book Managing NFS and NIS and there is a
section in the back of the book called PC/NFS describing a Unix
utility that enables a PC DOS machine to access a Unix machine
using the NFS file system as an extended DOS file system. I am
wondering if there is a Linux version of this available?
J.
[As far as I'm aware, that program is an NFS client for the PC - it runs
on DOS, and lets you use NFS from a remote UNIX box. If I'm right, the
standard version should work with Linux.
You'd be better off setting up Samba though. It does what you're looking
for - makes Linux look like an MS server. This would be better for Ghost,
as Ghost works on MS shares. -Alex.]
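A minimal Samba share along the lines Alex describes might look like the fragment below (section and option names from the smb.conf documentation; the path is an example, not a recommendation):

```
[global]
   workgroup = WORKGROUP

[export]
   path = /home/export
   read only = no
```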
Mon, 1 May 2000 08:57:32 -0700
From: lisa simpson <rflores@pssi-intl.com>
Subject: Mandrake and tab
When you hit the TAB key under a shell in Mandrake, it gives a
list instantly, unlike in Red Hat where you have to hit tab twice
if there are several similar entries. How do I disable that
behavior?
[Look in the shell manual page under the options section. There are
options you can set that control this behavior. I don't remember
the names offhand, and it's different for each shell. -Ed.]
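For bash in particular, the relevant knob is readline's show-all-if-ambiguous variable; Mandrake is said to enable it in /etc/inputrc (an assumption worth verifying on your system), so switching it off in your own ~/.inputrc should restore the two-tab behaviour. The variable name is from the readline manual:

```shell
# Append the per-user override; readline reads ~/.inputrc at startup.
INPUTRC_FILE="$HOME/.inputrc"
echo 'set show-all-if-ambiguous off' >> "$INPUTRC_FILE"
grep 'show-all-if-ambiguous' "$INPUTRC_FILE"   # confirm it is there
```

New shells pick the setting up automatically; a running bash can re-read the file with C-x C-r.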
Thu, 11 May 2000 08:47:38 -0700
From: <agomez2@axtel.com.mx>
Subject: Installation of Linux using an HP 486/25NI
Hello,
I hope that you can help me. I'm new to Linux and I'm trying to
install it on an HP 486 at 25 MHz.
The BIOS does not have the capability to recognize a second IDE
drive. (I have upgraded the BIOS to the latest version available
from the HP support website.)
The motherboard has an integrated NIC (I also have a 3Com 3C509).
I cannot find a way to start the installation, since I have Linux
Mandrake as well as TurboLinux on CD-ROM. I have tried doing it
using the CD-ROM of my second PC running Windows 98, with the
Cisco TFTP server and a local LAN between both PCs using a coax
10Base2 cable.
Where can I find a detailed explanation with some suggestions for
my problem? The manuals included with my Linux flavours are not
detailed enough; they assume that I have a CD-ROM drive for the
Linux installation.
How about an installation from an FTP site? Where can I find some
DETAILED information about that?
Thanks in advance for your help!
Sincerely,
Alex
Thu, 11 May 2000 08:36:27 -0700
From: NANCY Philippe <Philippe.NANCY@UCB.FR>
Subject: Energy star support
Last year I bought one of these cheap(er) East Asian PC computers
(like many of us?) with the Energy Star feature (i.e. no more
need to press any button to power off). But this feature is
implemented with M$ Win... and I've no idea of the way they
manage the hardware behind this process.
So, as I recently installed a Corel distribution, I would like to
know if there is any means to power off directly from Linux, and
not Shutdown-And-Restart, Open-M$Windows and Quit-From-There (and
Decrease-My-Coffee-Stock ;-} )
Thank you for your help.
Fri, 12 May 2000 07:02:37 -0700 (PDT)
From: Surfer PR <SurferPR1@excite.com>
Subject: Help with Voodoo3
I have followed every instruction I could find on how to install the Voodoo3 and Mesa, and all the 3D tests run just fine... but when I try to run any game that uses Glide or Mesa (Quake II), I try all the renderers but it does not work and continues to use the very lame software rendering... I have all my resolutions set... I am mainly having problems with glide2x.so or something like that... everything else in Linux (Mandrake 7.0) is fine...
Please help me.
Mon, 15 May 2000 08:53:49 -0700
From: "VanTuyl, George" <George.VanTuyl@voicestream.com>
Subject: Backup to a CD- Re-Writeable drive
I have been asked to put together a backup strategy for the
company's Red Hat 6.1 Linux gateway server. The backup medium
chosen (not by this individual) is an HP 9200i parallel-port
CD re-writeable drive/burner.
I would like to hear some reflections on and recommendations for
this strategy, please.
Thanks gvt.
Mon, 15 May 2000 12:33:44 +0100
From: <marco.brouwer@nl.abb.com>
Subject: image
Hi,
Do you know where I can get the "Don't fear the Penguin" logo, in
the file named linux-dont-f.jpg? Or can you send it to me?
cu
Wed, 17 May 2000 13:15:28 -0400
From: "Jeff Houston" <jhouston42@worldspy.net>
Subject: Video help
Howdy. I have 2 problems. First and foremost, I believe, is that
when I boot up Linux none of my window managers will work. On my
old computer they did, but not on my new one. I think it is
because my graphics card is not compatible, but I'm not sure
about that; I know that it is not listed in setup, but neither
was the graphics card in my other computer. Anyway, I went to the
website of my graphics card maker and they had a file to
supposedly add support for my card to Linux, but how do I go
about installing it? It is gzipped, and to be honest I have no
clue where I am or what I am doing once I get logged in to Linux
without any of the window managers. I have only had Red Hat about
3 days now :) Anyway, I have the file I supposedly need on a
floppy, but don't have any idea what to do with it now. Also,
after I installed Red Hat, for some reason Win98 became EXTREMELY
slow and is giving me problems and a lot of programs not
responding; any idea why this is?
Thanks for any and all help you can give me.
signed, NEWBIE
[It would seem that you are -extremely- confused here, Jeff. It would appear you have no idea how to use the bash prompt. Obviously, you need to read up on the subject - http://www.linuxdoc.org has a variety of tutorials and HOWTOs for Linux. Have you tried running 'Xconfigurator' (remember folks, it's case-sensitive)? See if your graphics card is listed there. To unzip a file that's gzipped, use the 'gunzip' command. That's about as much as I can tell you, since you do not provide enough information. As for your Win98 slowdown problem, I really see no link between installing Linux and that type of problem. Maybe I'm wrong, or maybe it's just you being a bit paranoid :) -Alex.]
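To flesh out the gunzip pointer: vendor files like this are usually gzipped tar archives, so the unpack sequence looks like the sketch below. The archive here is a throwaway built on the spot so the commands can be shown end to end; the real file names will differ, and the real archive should come with its own README:

```shell
# Build a stand-in archive first, purely for demonstration:
mkdir -p /tmp/driverdemo/driver
echo "pretend driver source" > /tmp/driverdemo/driver/README
tar czf /tmp/driverdemo/driver.tar.gz -C /tmp/driverdemo driver
cd /tmp/driverdemo
# The part that answers the question: gunzip, untar, then read the docs:
gunzip -f driver.tar.gz      # leaves driver.tar
tar xf driver.tar            # unpacks into ./driver/
cat driver/README
```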
Wed, 17 May 2000 10:12:38 -0700
From: "Jeffrey X" <krixsoft@hotmail.com>
Subject: "run of input data" error
I recently compiled the Red Hat kernel 2.2.12-20. Everything
went well and I can start the new kernel from LILO. lilo.conf
looks like:
....
image=/boot/vmlinuz
    label=linux
    root=/dev/hda5
image=/usr/src/linux/arch/i386/boot/bzImage
    label=new
    root=/dev/hda5
....
The problem I ran into is that I copied "bzImage" to "/boot/vmlinuz", ran lilo, and rebooted the system. When I tried to start the new kernel with label "linux", the system halted with the following messages:
"Loading linux....."
"Uncompressing Linux......"
"ran out of input data"
"-- System halted"
Why? Where is the problem? I have 128 MB of physical RAM and a 256 MB swap partition.
17 May 2000 13:12:55 -0000
From: "narender malhan" <malhan@rediffmail.com>
Subject: linuxsoftwareraid HELP
Dear Sir,
I want to configure my Linux box for mirroring (RAID 1) with SCSI
cards. I would like help or HOWTO documents regarding this.
Hope you'll reply soon,
waiting for an early reply,
yours,
singh.
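For the raidtools of that era, a mirror was described in /etc/raidtab; a sketch for a two-disk RAID-1 set is below. The device names are examples only - see the Software-RAID HOWTO for the authoritative layout:

```
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    persistent-superblock 1
    device                /dev/sda1
    raid-disk             0
    device                /dev/sdb1
    raid-disk             1
```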
Mon, 22 May 2000 11:56:08 +0200
From: REVET Bernard <bmrevet@igr.fr>
Subject: VIRUSES on the Net !!!
Many articles have been written in the press concerning the virus
"I Love You" and similar ones. It would be appreciated to have a
general article in the Linux Gazette about the problem of
viruses, as many computers have both Microsoft Windows and Linux
installed. What are the protections of Linux against virus
intrusions? What differentiates Microsoft OSes from Linux
concerning this problem? Is it safe or reasonable to continue to
use Microsoft Windows when it costs the community so much to get
rid of these viruses? To these financial worries one can add
updating systems (versions 95, 98, Millennium), plus WWW
browsers, plus bugs, plus plus.
Bernard
[I'm sure it's been said before, many, many times, but just for the point of clarity, I'll say it again. Viruses are virtually a non-issue in Linux, especially those like the Love Bug. I myself have never experienced that particular virus, but I've read about Linux users who have, and, after using a bit of common sense, I've come to the conclusion that it could not affect a Linux box. Why? The Love Bug is a Visual Basic script designed to run on Windows computers. Under Linux, you could just download the script and read it, without it doing any damage to your system. Most viruses will have little effect on Linux; most are Windows-centric, and designed to run only under the aforementioned OS. There are virus scanners available for Linux, and true, there are Linux-specific viruses. However, I wouldn't waste the time of the download if I were you - the odds of you getting one are -extremely low-. Thank you, and goodnight. -Alex.]
Sat, 20 May 2000 12:02:59 -0500 (COT)
From: Servicios Telematicos <servicios@r220h253.telecom.com.co>
Subject: Missing root password
Hello
I use Linux Red Hat 6.1, but my friend Fabian changed the LILO
configuration and the root password. Please help me.
I need to change the LILO configuration and root password.
Thanks,
victoriano sierra
Barranquilla
Colombia
Thu, 11 May 2000 13:19:41 -0500
From: Juan Pablo <j_pablo18@yahoo.com>
Subject: Linux
Hello, I want to know if there are books, texts, etc. about Linux in Spanish, which explain HOW TO USE it. Thanks!!!
[See below about an upcoming Spanish translation of the Gazette. Also, the Linux Journal site has a section listing the Linux Users Groups in many countries. Perhaps you can find one near you. Where are you located? http://www.linuxjournal.com/glue . -Ed.]
Thu, 11 May 2000 13:19:41 -0500
From: Warren <warren@guano.org>
Subject: Spiro Linux
Were you ever contacted by someone at Spiro Linux? I am searching for information on the distribution, but the published website, http://www.spiro-linux.com, is not answering.
No, I haven't. The domain doesn't exist now. You can try a search engine. I'm printing this in case one of our readers knows.
I called the number for SPIRO-Linux, +1 (402) 375-4337, and an automated attendant identified the company as "Inventive Communications".
Web searches turn up a lot of reviews, but no news on what happened to the company.
Thu, 11 May 2000 13:19:41 -0500
From: <jshock@linuxfreemail.com>
Subject: Windoze 98 under WINE
I know Wine is meant for running Windows applications, but is it also possible to just run Windows 98 from within Linux using Wine? I tried to run win.com with Wine, but I got a dosmod error of some sort. If it is possible to run Windoze 98 under Linux with Wine, then please tell me how; thanx in advance.
Thu, 11 May 2000 13:19:41 -0500
From: Eric Ford <eford@eford.student.princeton.edu>
Subject: read-only -> read/write
Back when I ran NetBSD, there was a way I could mount (or link?) a directory from a read-only medium (CD-ROM, an NFS mount that I only have read permission for, etc.) to a directory on my hard disk as read-write. If I added a file to the directory, it would be stored on my hd. If I modified a file, it would save my version on my hd and transparently use that version rather than the version on the ro medium. If I deleted a file, it stored something locally so it knew to make it appear as if that file wasn't there.
Can I do this in Linux? If so, how?
Thu, 11 May 2000 13:19:41 -0500
From: Amrit Rajbansh <amrit_101@rediffmail.com>
Subject: remote login methods
My workstation presently has a damaged hard disk. Is there any provision so that I can boot directly from the server using a Linux bootable floppy, instead of installing a new hard disk in the workstation?
Waiting eagerly for your reply.
Sat, 29 Apr 2000 13:02:46 +0200
From: Jan-Hendrik Terstegge <helge@jhterstegge.de>
Subject: Linux Gazette - German translation
Hi folks.
I love the Linux Gazette, but lately I think there are more and more Linux users in Germany who don't speak English (yes, it's possible to use Linux without speaking English - SuSE does a very good translation) or have real problems speaking it. I think most of them want to learn a lot about Linux, but there are not so many German-language pages. So I think it would be nice if there were some guys who speak English and German very well who could help me translate the Linux Gazette.
[As you know, we very much like to see versions of the Gazette in other languages. If you can translate a few articles per issue and put them up on a web site, that will be a start. Perhaps seeing the articles there will encourage some other people to offer to help. Remember to add your site to our mirrors list using the form at the bottom of http://www.linuxgazette.com/mirrors.html -Ed.]
Wed, 3 May 2000 08:33:41 -0700
From: Karin Bakker
Subject: Re: Linux gazette in a German version
How can I get the gazette in a German version ?
[A German-speaking reader or group will have to translate it and host it on their web site. This is how all our foreign-language mirrors work. Just this week I got a letter from somebody who may be willing to translate part of it, but he's looking for others to do some of the work. Let's see if I can find his e-mail address... Here it is: Jan-Hendrik Terstegge <helge@jhterstegge.de>
Would you like to speak with him and see if you guys can figure out how to get a translation off the ground? -Ed.]
Sun, 30 Apr 2000 10:14:36 -0400
From: usrloco <usrloco@userlocal.com>
Subject: userlocal.com
I just wanted to thank you for listing my site (userlocal.com) in the May issue of Linux Gazette.
Mon, 1 May 2000 21:58:34 -0500
From: Brad Schrunk <schrunk@mediaone.net>
Subject: SuSE Linux and Microsoft medialess OS
Dear Linux Supporters:
I have started playing around with SuSE Linux and am impressed with the product. I have been a dyed-in-the-wool Microsoft user for the last eight years. I have seen them step on a lot of folks, and that is part of business. I have also put up with their mindless CD keys that make a network administrator's life miserable. "Not copy protected" is what it said on all of their software. That was until they controlled the market; now everything is copy protected.
But the latest rumor, or plan, from Microsoft has put me over the edge. I read an article in the May 1, 2000 issue of InfoWorld reporting that Microsoft now wants to jam a "medialess OS" down our throats. The article, entitled "Users find Microsoft's medialess anti-piracy play hard to swallow", explains their latest attempt to stop software piracy. This is it for me.
I have been an ardent supporter up till this. I want to convert to something else. The problems are my Word, Access and other documents that use MS apps. Is there a way to continue to use these apps without a Microsoft OS? Or is there a way to emulate Windows apps, or are there other apps that can transparently use their files? Any help would be greatly appreciated.
Wed, 3 May 2000 21:02:05 +0200
From: Alan Ward <award@mypic.ad>
Subject: RE: Here comes another article
Just a line to mention I liked the new look a lot. You also did well to put the programs in separate files.
Sat, 06 May 2000 01:53:15 -0400
From: Charlie Robinson <crrobin@iglou.com>
Subject: New Logo
Sir,
I am very excited about Linux and the work that you and your staff perform. Because I am very much a "newbie", I turn to your web site religiously every month. Thanks for all of the hand holding and the impressive looking new logo - I like it.
Thu, 18 May 2000 11:28:04 +0100
From: Paul Sims <psims@lombard.co.uk>
Subject: new logo
Nice new logo - well done!
Mon, 08 May 2000 16:42:12 +1200
From: Linux Gazette <gazette@ssc.com>
Subject: Rsync
Ewen McNeill <ewen@catalyst.net.nz> and others wrote in about difficulties mirroring LG after we installed wu-ftpd. In response, we have also installed anonymous rsync. Many people find rsync more convenient to use than mirror, and it also has the advantage that it transfers only the changed portions of files, saving bandwidth. Hints for using rsync with Linux Gazette are in the LG FAQ, question 14. -Ed.
Tue, 09 May 2000 01:46:50 -0500
From: Felipe E. Barousse <fbarousse@piensa.com>
Subject: Spanish translation of Linux Gazette
Sirs:
I noticed on your mirrors list that there are "none known" translations to Spanish of Linux Gazette.
We are a Linux consulting firm based in Mexico City and with operations all across Latin America and the Caribbean.
We would like to take the task of translating LG into Spanish. We are able to coordinate a team of technical translators, Linux / Unix specialized and, eventually, when translated, host those pages in our web site.
I would like to know your opinion about this idea and, if approved, make all required arrangements for this to happen. We are also open to discuss any other outstanding issues to accomplish this project.
Hoping to hear from you soon.
[The translation is expected to go live on June 1 at http://www.piensa.com. The site has been added to the mirrors list. -Ed.]
Wed, 10 May 2000 13:50:05 -0400
From: Aurelio Martínez Dalis <aureliomd@cantv.net>
Subject: Subscribing Information
My name is Aurelio Martinez (aureliomd@cantv.net). I am a Linux beginner, and I have access to the Internet only by e-mail. Is it possible to receive the Linux Gazette in HTML format by e-mail? Thanks.
[Quoting from the LG FAQ:
The Gazette is too big to send via e-mail. Issue #44 is 754 KB; the largest issue (#34) was 2.7 MB. Even the text-only version of #44 is 146 K compressed, 413 K uncompressed. If anybody wishes to distribute the text version via e-mail, be my guest. There is an announcement mailing list where I announce each issue; e-mail lg-announce-request@ssc.com with "subscribe" in the message body to subscribe. Or read the announcement on comp.os.linux.announce.
You'll have to either read the web version or download the FTP files.
I asked our sysadmin whether we could set up a mailing list for the
Gazette issues themselves, and he was unwilling, again because of the size issue. Many mail transport systems are configured to reject messages larger than 1 or 1.5 MB. "And I don't want my sysadmin mailbox stuffed chock full of bounced 4M emails."
Note to mirrors:
We receive at least one request a month to send the
Gazette via e-mail. So there is definitely reader demand for it. If you wish to offer Gazette via e-mail, you would need to send out the current issue's FTP file along with lg-base-new (the changed shared files) every month. Users would somehow need access to lg-base (all the shared files) when they subscribe and whenever they reinstall the Gazette. I don't know how you would handle changes to the FTP files later (i.e., corrections to back issues). -Ed.]
Wed, 17 May 2000 10:23:37 +0300
From: Shelter-Afrique <info@shelterafrique.co.ke>
Subject: Compliments
Thanks 4 maintaining this great magazine - it's been really helpful!
D. S. Daju
[Thanks for the vote of encouragement. -Ed.]
Mon, 22 May 2000 09:11:22 EDT
From: <LFessen106@aol.com>
Subject: Kudos
Hello! My name is Linc Fessenden and I first want to congratulate you on an outstanding magazine! I also happen to run a Linux User Group (Lehigh Valley Linux User Group) in eastern Pennsylvania. We were wondering if you might be willing to donate any promotional item(s) that we could give away at a meeting to help increase Linux enthusiasm and awareness, and also to promote the Gazette? Please let me know, and keep up the great work!
[Thanks for the feedback. We do not currently have any Gazette-specific merchandise. I have forwarded your request to our GLUE coordinator (GLUE = Groups of Linux Users Everywhere, http://www.linuxjournal.com/glue) who can give you further info. -Ed.]
Sat, 20 May 2000 12:55:34 +0200
From: Maciej Jablonski <maciekj@pik-net.pl>
Subject: a comment about page
On the Polish version of the on-line Linux Gazette there are empty pages, for example: Accessing Linux from DOS!?
[Could you please send me some URLs that have the wrong behavior so I can see what the problem is? The master copy (all the way back in issue #1) is coming up fine. Each mirror is responsible for its own site. We do not update the mirrors centrally. -Ed.]
Sun, 30 Apr 2000 06:14:54 +0200
From: Meino Cramer <user@domain.nospam.com>
Subject: Moonlight-Atelier 3D ... sigh
Dear Editor!
In one of the articles in issue 53 of the Linux Gazette, Moonlight Atelier 3D is mentioned as a 3D modeller and raytracer.
Unfortunately this program has been taken off the web, for whatever reason.
Please take a look at www.moonlight3d.org.
I used this program before its "shutdown" and I am really sad that there is neither support nor any updates any more.
Maybe you can obtain some information about this case?
Thank you very much for your help and for the Linux Gazette!
Contents:
The June issue of Linux Journal is on newsstands now. This issue focuses on People Behind Linux.
Linux Journal has articles that appear "Strictly On-Line". Check out the Table of Contents at http://www.linuxjournal.com/issue74/index.html for articles in this issue as well as links to the on-line articles. To subscribe to Linux Journal, go to http://www.linuxjournal.com/subscribe/index.html.
For Subscribers Only: Linux Journal archives are available on-line at http://interactive.linuxjournal.com/
Best Linux 2000 R2-Moscow is a Russian-language version of the Best Linux distribution, which is also available in English, Swedish and Finnish.
Las Vegas, NV May 9, 2000 Today at Networld+Interop (N+I), Axis Communications is demonstrating a new wireless solution that provides broadband access to the Internet and LANs for a wide range of emerging wireless devices. General availability is expected in the fourth quarter. The Bluetooth Access Point will be used to create local "hot spots," areas where instant wireless broadband access to the Internet or a network is available to Bluetooth enabled devices, such as cell phones, PDAs, laptops and emerging Webpads. These hot spots will enable new and innovative services for a variety of user environments, in the office, home, hotels, retail establishments and other public places such as the airport.
In the hotel of the future, while you check into your room, your laptop checks into the office - retrieves e-mail, voicemail and accesses corporate Intranet services - all with broadband speed. Phone calls will be routed automatically via telephony services to your personal mobile phone, providing one number simplicity and lower-cost phone bills. The hotel will offer new conveniences: such as easy wireless faxing and printing from anywhere in the hotel to the business center, poolside food service ordering and streamlined checkout payment all from your PDA.
The Bluetooth Access Point from Axis is the first to support both data and voice services. The product platform is based on Axis' integrated system-on-a-chip technology and embedded Linux, which includes a Bluetooth stack for Linux developed by Axis and recently released under GNU General Public License (GPL) to the open source community.
OTTAWA, Ontario - May 2, 2000 - Newlix Corporation announced today a strategic relationship with 3D Microcomputers Wholesale and Distribution to market its Newlix OfficeServer, a Linux-based network operating system.
Newlix is focusing on building an outstanding array of 'set-and-forget' performance features into a reliable, cost-effective network operating system, which runs on standard Intel-based hardware. The company's flagship product, Newlix OfficeServer, is a robust network operating system which features plug-and-play software installation coupled with easy-to-use, web-based configuration tools.
3D Microcomputers is the largest Canadian-owned manufacturer of computer systems. The company provides products and services to 6,000 computer resellers across Canada.
Ottawa, ON - May 3, 2000 - Newlix Corporation and Look Communications Inc. today announced a marketing partnership to promote the use of Newlix OfficeServer, a turnkey Linux-based network operating system for small and mid-sized businesses looking for secure, company-wide Internet access.
Look Communications is a leading wireless broadband carrier and one of the largest Internet Service Providers in Canada. The Newlix OfficeServer will be included in a host of Web-based applications Look offers to support business Internet requirements.
Newlix Corporation (www.newlix.com) is a privately funded company headquartered in Ottawa, Ontario and founded in 1999. Corel Corporation (Nasdaq: CORL; TSE: COR) is an investor in the company. Newlix develops software for an easy-to-use Linux-based network operating system that meets the networking and internetworking needs of small to medium-sized businesses and provides OEMs, VARs and other partners with the essential building blocks to custom tailor networking solutions. The company's flagship product, Newlix OfficeServer, provides a robust, worry-free, 'set-and-forget' communications and networking platform, designed to be delivered in partnership with hardware vendors, connectivity providers and application service providers.
Red Hat releases 64-bit Itanium Linux (ZDnet article)
(Official press release from Red Hat)
RESEARCH TRIANGLE PARK, N.C.--April 25, 2000--Red Hat, Inc., announced today that it is now taking orders for developer tools and services for the embedded Linux market. The Red Hat Embedded DevKit (EDK) begins shipping immediately and answers the demand for open source software and tools in the growing embedded space, which includes Internet appliances and handhelds.
The Red Hat EDK provides an integrated development environment (IDE) to deliver software developers everything needed to quickly and easily create embedded Linux applications on a wide spectrum of pervasive computing platforms. The targeted markets include manufacturers who are building Internet infrastructure appliances and consumer Internet appliances, as well as the traditional telecom, datacom, industrial and embedded enterprise markets.
The Red Hat Embedded DevKit is a completely open source software package and is sold via redhat.com with varying levels of services starting at $199.95.
A key advantage to the Red Hat Embedded DevKit is access to the premium support services that Red Hat has pioneered in the open source space. Red Hat Support customers receive assistance on the usage of the Embedded DevKit and response to questions about embedded Linux. In addition, customers are entitled to priority response on corrections to any EDK or kernel problems they submit. This ensures that customer projects stay on schedule.
For EDK, Red Hat offers two types of premium support:
Incident support for small workgroups, and Platinum Support for larger development teams. Incident packages provide the customer with priority response on a fixed number of requests. Platinum packages provide priority response on an unlimited number of requests, but are based on the number of software developers using the EDK.
A distribution from Vancouver, Canada.
FHS 2.1 is done!
I'm pleased to announce the release of FHS 2.1, an updated version of the Filesystem Hierarchy Standard for Linux and other Unix-like operating systems. FHS is part of the draft Linux Standard Base specification, which will soon be updated to reflect FHS 2.1.
FHS 2.1 supersedes both FSSTND 1.2 and FHS 2.0. There have been some significant improvements and bug fixes since FHS 2.0. Please see the FHS web site for details. (It has been a few years since the last official release, so check it out if you're using a previous version of FHS or FSSTND.)
What is FHS?
FHS defines a common arrangement of the many files and directories in Unix-like systems (the filesystem hierarchy) that many different developers and groups have agreed to use. See below for details on retrieving the standard.
The FHS specification is used by the implementors of Linux distributions and other Unix-like operating systems, application developers, and open-source writers. In addition, many system administrators and users have found it to be a useful resource.
FHS or its predecessor, FSSTND, is currently implemented by most major Linux distributions, including Debian, Red Hat, Caldera, SuSE, and more.
FHS 2.1 and other FHS-related information is available at http://www.pathname.com/fhs/
Information on the Linux Standard Base is available at http://www.linuxbase.org/
Daniel Quinlan <quinlan at pathname.com>
FHS editor
Linux Standard Base chair
Strictly eBusiness Solutions Expo |
June 7 & 8, 2000 Minneapolis Convention Center Minneapolis, MN Visit www.strictlyebusinessexpo.com
|
USENIX |
June 19-23, 2000 San Diego, CA www.usenix.org/events/usenix2000/
|
LinuxFest |
June 20-24, 2000 Kansas City, KS www.linuxfest.com
|
PC Expo |
June 27-29, 2000 New York, NY www.pcexpo.com
|
LinuxConference |
June 27-28, 2000 Zürich, Switzerland www.linux-conference.ch
|
"Libre" Software Meeting #1 (Rencontres mondiales du logiciels libre), sponsored by ABUL (Linux Users Bordeaux Association) |
July 5-9, 2000 Bordeaux, France French: lsm.abul.org/lsm-fr.html English: lsm.abul.org
|
Summer COMDEX |
July 12-14, 2000 Toronto, Canada www.zdevents.com/comdex
|
* O'Reilly/2000 Open Source Software Convention |
July 17-20, 2000 Monterey, CA conferences.oreilly.com/convention2000.html
|
Ottawa Linux Symposium |
July 19-22, 2000 Ottawa, Canada www.ottawalinuxsymposium.org
|
IEEE Computer Fair 2000 Focus: Open Source Systems |
August 25-26, 2000 Huntsville, Alabama www.ieee-computer-fair.org
|
Atlanta Linux Showcase |
October 10-14, 2000 Atlanta, GA www.linuxshowcase.org
|
Fall COMDEX |
November 13-17, 2000 Las Vegas, NV www.zdevents.com/comdex
|
USENIX Winter - LISA 2000 |
December 3-8, 2000 New Orleans, LA www.usenix.org
|
The E-Commerce Times has a Linux section: http://www.ecommercetimes.com/linux/
Caldera sponsors Linux Professional Institute's (LPI)
exam-based certification program, TurboLinux partners with Computer
Associates for Unicenter, WordPerfect hits the 1-million-download mark.
http://www.ecommercetimes.com/news/articles2000/000517-tc.shtml
One Year Ago: Penguin and Linux Taking Center-Stage.
(An article originally published in May 1999.)
For Linux, 1998 was kind of like the year its voice broke....
http://www.ecommercetimes.com/news/articles2000/000503-tc.shtml
The End of Linux Hysteria? 2000 could be the year that Linux comes fully
into its own....
http://www.ecommercetimes.com/news/articles2000/000509-1.shtml
IBM and Linux: A Test of Metal
http://www.ecommercetimes.com/news/articles2000/000522-1.shtml
NAMPA, Idaho and MOUNTAIN VIEW, Calif., - May 17, 2000 - HostPro, Inc. (www.hostpro.net), a Web hosting subsidiary of Micron Electronics, and Cobalt Networks, Inc. (www.cobalt.com) today announced an alliance to expand HostPro's Web hosting programs by offering dedicated server solutions on Cobalt RaQ 3 server appliances. The arrangement enables HostPro to offer direct sales and support to its dedicated Web hosting customers by using a server appliance platform specifically designed by Cobalt Networks for dedicated hosting.
Orlando, Florida, May 22, 2000 - Cobalt Networks, Inc. today announced Cobalt StaQware, a high availability clustering solution that ensures the uptime of business critical Web sites and applications. StaQware, which runs on Cobalt's RaQ 3i server appliances, offers 99.99 percent availability and requires no customization or modification to applications.
Hello!
I read several of your Linux Gazette issues. Just to let you know- my company sells a line of RAID products that are Linux compatible.
Our address is www.raidweb.com
The advantages of our products are that we sell systems utilizing either SCSI or IDE hard drives. Also, our RAIDs are O/S independent--useful if your readers are utilizing multiple-boot or different operating systems.
Ann Arbor, Michigan, May 8, 2000 - Cybernet Systems Corporation today announced two new product releases with enhanced features for its popular Linux-based NetMAX Internet appliance software line, providing consumers with more capabilities and flexibility at the same low cost and in the same easy, 15-minute installation format. The NetMAX Internet Server Suite now includes the ability to host multiple domains on a single IP address, and improvements to the NetMAX FireWall Suite include a proxy server with 100 MB of cached storage to speed network performance.
Santa Clara, CA -- May 22, 2000 -- Computer I/O Corporation, a provider of communications servers, embedded software and services, announced the Easy I/O (TM) T1/E1 Streaming Server, a high-performance communications server specifically designed for data insertion, capture and analysis applications.
The Linux-based T1/E1 Streaming Server functions as a communications probe enabling client applications to directly access T1/E1 DS0 channels from the LAN environment.
DENVER-- LinuxMall.com Inc. and EBIZ Enterprises Inc., announced today both parties have executed a letter of intent (LOI) to merge. The merger of LinuxMall.com and TheLinuxStore.com, a division of EBIZ Enterprises, will position the combined entity as the largest vendor-neutral Linux shopping mall and destination on the Internet. The resulting company will offer the most comprehensive selection of Linux products and solutions, information and services. The companies' combined prior fiscal year revenues were more than $25 million.
Under terms of the agreement, the new corporation will be known as LinuxMall.com. Today, LinuxMall.com is the No. 1 e-commerce site for the Linux community and was recently listed as the No. 1 shopping destination in Linux Magazine's "Top One Hundred Linux Sites." The rise of the Linux operating system has been one of the top technology stories of the year as companies adopt the system within their enterprises. The TheLinuxStore.com Web site will become a store within the LinuxMall.com collection of online stores.
The new Company intends to apply for NASDAQ listing after successful completion of the proposed merger.
The Software Carpentry Project is pleased to announce the selection of finalists in its first Open Source Design Competition. There were many strong entries, and we would like to thank everyone who took the time to participate.
We would also like to invite everyone who has been involved to contact the teams listed below, and see if there is any way to collaborate in the second round. Many of you had excellent ideas that deserve to be in the final tools, and the more involved you are in discussions over the next two months, the easier it will be for you to take part in the ensuing implementation effort.
The 12 entries that are going forward in the "Configuration", "Build", and "Track" categories are listed at the URL below. The four prize-winning entries in the "Test" category are also listed; we are putting this section of the competition on hold for a couple of months while we try to refine the requirements. You can inspect these entries on-line at http://www.software-carpentry.com/first-round-results.html
From the Big Iron down to a Pocket Server. Stalker Announces a Linux StrongARM version of the CommuniGate Pro Mail Server
MILL VALLEY, CA - May 15, 2000 - Just two weeks after the successful release of the AS/400 version of CommuniGate Pro, Stalker Software, Inc. today announced the Linux StrongARM version of their highly scalable, carrier-grade messaging server.
CommuniGate Pro was initially designed to be a highly portable messaging system that can effectively use the resources of any operating system on any hardware platform. Current installations range from small and mid-size ISPs up to extra-large ISPs and Fortune 500 companies.
With this release, Stalker expands the number of supported Linux architectures: besides the "regular" Intel-based systems, CommuniGate Pro can be deployed on PowerPC, MIPS, Alpha, Sparc, and now StrongARM processors running the Linux(r) operating system.
The highly scalable messaging platform can support 100,000 accounts with an average ISP-type load on a single server, and CommuniGate Pro's unique clustering mechanisms allow it to support a virtually unlimited number of accounts.
For office environments and smaller ISPs, CommuniGate Pro makes an ideal Internet appliance when installed on MIPS-based Cobalt Cubes(r) and, now, Rebel.com's NetWinder(r) mini-servers.
The CommuniGate Pro Free Trial Version is available at http://www.stalker.com/CommuniGatePro/.
First UK Linux Conference Set To Challenge IT In Business SuSE Linux Ltd, Europe's leading Linux distributor, will be hosting the first UK Linux Conference on 1st June at the Olympia Conference Centre in London. The Conference, in association with IBM, is set to position Linux as a viable option for the corporate desktop, whilst preserving its traditional role of powering many corporate servers. Leading industry figures, including Larry Augustin of VA Linux, Alan Cox of Red Hat, Dirk Hohndel of SuSE Linux and Vice President of the Xfree86 Project, and John Hall from Linux International, will discuss issues ranging from the origins and direction of Linux, to the increasing relevance it has in the business environment today.
IRVINE, CA (May 17, 2000) - Magic Software Enterprises announced completion of two key acquisitions. Magic purchased a majority interest in Sintec Call Centers Ltd. (Sintec), a Magic Solutions Partner that is the developer of the leading call center management software in Israel. Magic plans to market and sell the Magic-based solution -- which already has been implemented extensively in Israel -- worldwide under the brand name "Magic eContacit". Magic also has acquired ITM, another Magic Solutions Partner with expertise in the development and implementation of e-commerce projects.
IRVINE, CA (May 22, 2000) - Magic Software Enterprises (Nasdaq: MGIC), a leading provider of state-of-the-art application development technology and business solutions, announced today that it has signed a deal with Compass Group PLC, a major worldwide foodservice organization, to deliver an e-procurement solution. The e-procurement solution, which is being developed and implemented by Magic's French subsidiary at Compass Group France, will be built using Magic's award-winning business-to-business e-commerce solution, Magic eMerchant. The new application is expected to become operational in June 2000.
"We chose Magic over Oracle and IBM because they were able to provide us a competitive, fixed-price solution that could be implemented much more quickly and efficiently than the other two, and would adhere exactly to our specific data model," said Ludovic Penin, Compass Group's IS director in France.
SANTA CRUZ, Calif. - May 22, 2000 - Lutris Technologies, Inc., an Open Source enterprise software and services company, today announced that its Professional Services group was chosen to deliver the interactive customatix (www.customatix.com) Web site for Solemates. Customatix.com is an interactive E-commerce site that enables customers to design and build their own shoes click-by-click from the sole up.
Solemates, the company behind customatix.com, relied on Lutris Technologies' Professional Services group to develop a site capable of delivering the three billion trillion combinations of custom shoe designs that only a Web-based business could offer customers. Visitors to customatix.com can select from a vast assortment of shoe design elements, including sole heights, materials, colors, laces, and other options to build a uniquely individual pair of shoes.
Lutris made customatix.com come to life quickly. Using Enhydra (www.enhydra.org), a leading Open Source Java(tm)/XML application server, the Professional Services group built a complex, multi-faceted application, architecting a solution that integrates seamlessly with Solemates' partners, including UPS, Cybersource, and FaceTime. The Enhydra Open Source application server decreased Solemates' time-to-market to a fraction of what it could have been using closed source, proprietary software.
Using Enhydra XMLC, Lutris was able to deploy Solemates' business in five months, roughly half the time it would have taken without this innovation, and at a cost of approximately one-third of what a pioneering site typically costs, according to recent GartnerGroup survey data. Enhydra XMLC separates HTML design and coding from business logic, allowing interface designers and Java programmers to work simultaneously yet independently. Since a core benefit of customatix.com's vision lay in allowing customers to view their creations in real-time, Enhydra XMLC provided the precise technology to support such an inventive business strategy.
LinuxMall's Ask Linus forum.
Proton Media specializes in creating multimedia web presentations using Flash 4.0. The same presentation may be used as a trade-show kiosk and also given away as a "CD-ROM business card". (This URL requires Macromedia's Shockwave Flash plug-in. A link to the Linux version is available at the site.)
TheLinuxExperts.com sells Linux servers in North America and installs office LANs.
Firstlinux.com "I've installed Linux: What Next?" is a series of articles aimed at helping you realise the full potential of Linux.
Making the Palm/Linux Connection (O'Reilly article)
Universal Device Networking -- the Future is Here (LinuxDevices.com article)
AnchorDeskUK article about Red Hat's default login mishap
ZDNetUK article: "Linux took another major stride towards corporate acceptance last week, with IBM's announcement that IBM Global Services would support S/390 versions of Linux from SuSE and TurboLinux."
Can Linuxcare stay afloat? (ZDNetUK article) "The real story behind a potential open-source disaster."
Browser Wars: the Future Belongs to the Dinosaurs
Most of these are linked directly or indirectly from the indicated OSOpinion articles.
How to publish a trade secret (Microsoft's Kerberos specification)
Microsoft's Gamble of a Lifetime (switching from selling software to on-line subscription services).
Open source is (so far) a road to nowhere
Microsoft - The Penguin's Buddy (some more ways MS is shooting itself in the foot)
Napster and Gnutella
Infoworld article about the "medialess" OS
We would like to announce that the next version (v0.2.20) of BORG (BMRT Ordinary Rendering GUI) is now available for download at www.project-borg.org. BORG is now running on most of the BMRT supported platforms including LINUX, WinNT, SOLARIS. (Requires Java 1.1.7 or higher.)
I would like to announce the availability of AccountiX for Linux. This is a full-featured, modular accounting package. The source code is available in order to provide customization to fit an end user's needs. Information on the package is located at www.accountixinc.com.
Frank Quirk, President
AccountiX, Inc.
With "Heavy Gear II" Loki Entertainment Software is opening the door to new dimensions in the Linux world: with 3D audio effects and joystick support, a further step has been taken towards the acceptance of Linux by the home user.
With the release of the first "big" Linux game, "Civilization: Call To Power" (awarded "Best End User Product of 1999" by Linux Journal), Loki Entertainment has already made a name for itself. Like its predecessors, "Heavy Gear II" makes optimal use of the networking qualities of Linux, making turn-based or real-time multi-player games possible. Given its success, it is no surprise that around a dozen more titles are planned to be ported to Linux in 2000.
Loki is currently placing its main emphasis on 3D sound support by means of OpenAL. "OpenAL represents a milestone for Linux," says Scott Draeker, president of Loki Entertainment Software. "Until now, 3D audio features in games were reserved for users of other platforms. This has all changed now."
OpenAL, entirely in the tradition of the Open Source community, is issued under the LGPL (GNU Library General Public License).
Loki released 7 front-line Linux game titles in 1999, and plans 16 titles for 2000. For more information, visit www.lokigames.com.
We proudly announce the beta release of Linux SDK for use with Quake III Arena.
The full version of Linux SDK will benefit Linux enthusiasts and aspiring game developers alike by allowing them to create maps and game code modifications under Linux. Windows users have had this capability since the release of the original Quake game.
Linux SDK offers Linux users a toolchain for content creation. It combines software for image processing, conversion and editing with a fully-featured map editor compatible with the Quake III engine. The features include custom texturing, lighting, patches, shaders, entities and more. It is based in part on the QERadiant code from id Software, Inc.
Download the unsupported beta version.
Public demo of Kylix, Borland's Delphi for Linux.
Aestiva HTML/OS is a simple way to build a database designed for the web.
CRiSP 7.0 is a programmer's editor including file compare, FTP client, GUI and text modes, vi/emacs emulation, and much more. (21-day evaluation copy)
Cybozu Office 3 is an English version of Cybozu's Japanese office suite. Includes ten applications. Download the 60-day trial at http://cybozu.com/download/index.html
Canvas 7 Linux Beta 2 by Deneba provides vector drawing, diagramming, technical illustration, creative drawing, image editing, web graphics and page layout features in one powerful application. Download the beta from Deneba's web site.
MontaVista real-time scheduler for the Linux kernel. (For embedded applications.) Download source and documentation at http://www.mvista.com/realtime/rtsched.html
EiconCard Connections for Linux, when combined with an EiconCard network interface card, provides the wide area communications needs for an easy-to-use, low-cost, and easy-to-manage communications server. The flexibility of the EiconCard, when combined with this software, provides powerful IP Routing over various WAN protocols, making it ideal for applications such as Web Servers or Thin Server Appliances. In addition, many Linux-based embedded systems, such as point-of-sales, can use the X.25 connectivity built into the software. It will be available in June.
Opera has signed a deal with RSA to use RASFE Crypto-Ci 1.0 encryption software in its Opera web browser.
I had a great time this weekend at an annual science fiction conference named Baycon. Heather and I were staff in their first terminal room, sponsored by Red Hat, LinuxCare, and VA Linux Systems and it was a rousing success. Other SF conventions are looking forward to doing the same.
Good news: Heather, my wife and principal editor, will be taking over the Answer Blurb. She's refined her 'lgazmail' Perl script to the point where she can take up the slack, and has graciously agreed to take over responsibility for the monthly blurb as well.
Long-time readers may recall that early Answer Guy columns had no blurbs. They also had no HTML formatting. The URLs weren't even wrapped in links! I'd been frustrated by this for some time --- from about the time that I realized that Marjorie (then the editor of LG) was publishing my responses as a column (and that she had dubbed me "The Answer Guy" --- a title that still makes me nervous).
Heather agreed to step up to the plate and do the formatting. She tried a few mail-to-web utilities like MHOnArc, John Callendar's Babymail, etc. Then she decided to derive her own from a Babymail source. So her script reads "Jim's e-mail markup hints" and converts it to reasonable HTML.
Heather also designed and drew the new Answer Guy Wizard (TM?) with its distinctive Question Crystal Ball and Answer Speak Bubble --- which visually refer to the question and answer speak bubbles throughout the column. (She's also added the pilcrow bubble for editorial comments).
In other words, Heather went way beyond just "wrapping the URLs in links" and completely overhauled the visual look of our column.
I should also note that Heather is no slouch technically. She has often helped me find answers to questions --- including the answers that I've published here.
When we did that overhaul I also decided to add the "blurbs". The idea was to say things of interest that were not in response to any questions. (I suppose I could've used a shill to jury-rig the desired questions, but that would be cheating!).
The blurb has sometimes been editorial (commenting on the Mindcraft/Microsoft fiasco and the wonderful Linux community anti-FUD response). Sometimes it's been a summary and commentary on the sorts of questions we got in the previous month, feedback that we got from my answers, and any trends that we were seeing, etc.
For awhile I tried to identify a specific person and forum every month --- to recognize them with the "Answer Guy's Support Award." I wanted to point out where other individuals were providing lots of technical support in various fora. For instance, in May I wanted to recognize Ben Collins of the Debian-SPARC mailing list. He seems to respond to most of the questions that show up there. (Unfortunately I was too much of a flake to keep that up for long. It's hard to dig up a really good new selection every month).
Of course there have also been the two April Fool's hoax blurbs and a few others that weren't really there.
The sad fact is that I don't have enough time to conceive and compose articles for this column every month. It is much easier for me to answer questions (react) than to write from scratch. (I tend to digress enough when there IS a question at hand. I'm a regular attention deficit train wreck when left to my own devices!).
Let me reassure everyone that I'm not leaving the "Answer Guy" column. I'm somewhat compulsive about answering technical questions, and I used to make a hobby out of USENet netnews before the advent of LG ensured that I get 100 or so diverse Linux questions every month in my inbox. I sometimes still make it out to USENet --- though I dropped the uucp netnews feed that used to fill the disk on antares on a semi-regular basis! (Now I just telnet out to a shell account at my ISP, or use my $NNTPSERVER environment setting to get to his news server).
I'll also probably still insert a few comments to supplement Heather's.
Hi everybody. I suppose I don't have to introduce myself now. I will also be taking on some deeper organizational features -- in the next few months we'll see a revamp of how Tips, the Mailbag, and Answer Guy messages are formatted -- though I think they won't look all that different.
Also, we'll have more Wizards joining us. Jim had conceived of this from the early days as The Answer Gang -- he was just helping an editor with a few technical notes, a role which anyone can play. The Mailbag and Tips are popular, and more gurus are around now. If you'd like to join The Answer Gang as a regular, let us know what your specialties are.
I'll have something more "Blurb"ish next month. On to the answers!
From Tom on Fri, 05 May 2000
Hi Jim (or James? Is Jim short for James?)
Jim is short for James. I tend to go by Jim.
First let me thank you for the work you're doing in the LG. I've read it for about 2 years now and have seen lots of tips. Even the AnswerGuy section is interesting, sometimes amusing... But let me come to the point now
I have Suse Linux 6.3, Kernel 2.2.13, with NCR SCSI and 2 disks. With fdisk I set Boot=Y on /dev/sda1.
mtab looks like:
/dev/sda1   /boot
/dev/sda2   /
/dev/sdb1   /home
But mtab will be processed after LILO has loaded the kernel, right?
/etc/mtab is the file which contains a list of currently mounted filesystems. /etc/fstab is the list of filesystems which are "known" to the system. /proc/mounts is a virtual file, it is a dynamic representation of the kernel's mount table.
/etc/mtab might be out of sync with /proc/mounts in cases where the system is in single-user mode --- and the root filesystem is mounted read-only --- or under other odd circumstances. /proc might not be mounted in some other cases. The structure of the two files is similar, but not quite identical. I've experimented with making /etc/mtab a symlink to /proc/mounts (and adjusting a few startup scripts to cope with that). It seems to work.
The main commands that use /etc/mtab are the 'mount' command (when used with no arguments, to display the list of currently mounted filesystems) and the 'df' command (which displays the currently available free space on each mounted fs). Personally I think these (and any others that need this info) should be adjusted to use /proc/mounts in preference to /etc/mtab --- since this would be one step that might allow us to mount / in read-only mode.
Of course that should be abstracted through a library and it should still be able to use /etc/mtab for cases where /proc isn't available (particularly on some sorts of embedded systems).
But I digress.
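On a running Linux system, the difference between the two views is easy to see for yourself (assuming /proc is mounted, as it is on nearly every Linux setup):

```shell
# The kernel's own mount table, generated on the fly:
cat /proc/mounts

# The mount table maintained in userspace by mount(8):
cat /etc/mtab

# 'mount' with no arguments and 'df' both read /etc/mtab,
# which is why they can be wrong when /etc/mtab is stale:
mount
df
```

Diff the two files on a normally booted system and they usually agree; boot into single-user mode with / read-only and /etc/mtab falls behind.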
lilo.conf looks like:
initrd = /boot/initrd    # exists
boot = /dev/sda          # put the Bootstrap code here
#-#-#-#-#
image = /boot/vmlinuz    # exists
root = /dev/sda2         # the device holding /
label = lx               # short but unique :-)
When running lilo, it shows
Added lx *
When rebooting the system, it hangs after printing LI. I've read the lilo-README. It says that this is caused by "geometry mismatch" or having moved "/boot/boot.b without running the map installer."
Uuuuh?!? What's the problem? I just don't get it ... Please help me. - Thank you!
Tom
Greez from Switzerland!
Try adding the "linear" directive to the global section of your /etc/lilo.conf. That would be the part before the first "image=" directive.
Try running /sbin/lilo -v -v (should give more verbose output).
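With that change, the global section (a sketch based on Tom's own config) would look something like this:

```
linear                   # use linear sector addresses instead of CHS
boot = /dev/sda          # put the Bootstrap code here
initrd = /boot/initrd
#-#-#-#-#
image = /boot/vmlinuz
root = /dev/sda2
label = lx
```

Remember that /sbin/lilo must be re-run after any change to /etc/lilo.conf for the new map to be installed.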
From Tom on Mon, 08 May 2000
Hello Jim
Thank you for your quick response!
Try adding the "linear" directive to the global section of your /etc/lilo.conf. That would be the part before the first "image=" directive.
I've done that and ... it works! Why does it? Is there a general problem with SCSI drive(r)s and the old-style addressing C/H/S? AFAIK "linear" means that the sectors on a disk are counted from 0 to n, as SCSI does itself on block devices. But now I'm digressing
Thanks again! Tom
The failure mode you described (the LILO loader stops at just LI) is described in their documentation ("tech.dvi" or "tech.ps" depending on your distribution/source).
Basically the boot loader prints the letters LILO one at a time, each at a specific point in its boot process. This is useful for debugging and troubleshooting. LI says that the first-stage boot loader completed, and the second-stage boot loader was found, but the mapping data (used to find the kernels, etc.) was not. This is usually due to a problem where the BIOS and the LILO loader are using incompatible block addressing modes. (One is using CHS --- cylinder/head/sector --- while the other is using LBA/linear).
Some SCSI setups expect linear addressing; some SCSI controllers, or controller/drive combinations, emulate the old WD1003 (ST506) interface closely enough that CHS addresses will do.
Sometimes you need to switch your CMOS/BIOS to use UDMA/LBA modes and/or add the "linear" to your /etc/lilo.conf --- sometimes you need to just take the "linear" directive out of /etc/lilo.conf (and re-run /sbin/lilo, of course).
From The Phantom on Mon, 01 May 2000
Hello,
I'm wondering if you can answer a few questions on the UNIX rm command. I need a response before May 3rd if possible. Your assistance on this matter is greatly appreciated. Thank you for your time and service. Here are the questions
Hmm. Wouldn't want this assignment to be late for the prof, heh?
Well, at least you had the brights to use a hotmail account rather than sending this from your flunkme@someuniv.edu address.
The rm unix command lowers the link count of an inode. When the link count goes to zero the inode is made available to the system and cleared of extraneous information.
The 'rm' command is basically a parser and wrapper around the unlink() system call.
BTW: This definition is an oversimplification. When the link count is less than 1 AND THERE ARE NO OPEN FILE DESCRIPTORS ON THAT FILE then the system does some sort of maintenance on the inode and any data blocks that were assigned to it.
Exactly what the filesystem does depends on what type of fs it is, and on how it was implemented for that version of that flavor of UNIX.
Usually the inode is marked as "available" in some way --- so that it can be re-used for new files. Usually the data blocks are added to a free list, so that they can be allocated to other files.
(It is possible for some implementations to mark and reserve these to allow for some sort of "undelete" process --- and it would certainly be possible to have "purge" and "salvage" features for some versions of UNIX).
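A quick, harmless way to watch the link count at work from the shell. This assumes GNU stat (as found on Linux) for the `%h` link-count field:

```shell
# create a file, give it a second hard link, and watch the link count
f=$(mktemp)                 # temporary scratch file
ln "$f" "$f.extra"          # second directory entry pointing at the same inode
stat -c %h "$f"             # prints 2: two links to one inode
rm "$f.extra"               # rm calls unlink(); the count drops back to 1
stat -c %h "$f"             # prints 1
rm "$f"                     # count reaches 0; the blocks go to the free list
```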
1) Explain link count?
The link count is one of the elements (fields) of the inode structure. An inode is a data structure that is used to manage most of the metadata for a file on a UNIX like filesystem.
On UNIX filesystems a directory entry is (usually) a link to an inode. (On some forms of UNIX, on some types of filesystems, there may be exceptions to this. Some filesystems can store symbolic link data directly in their directory structures without dereferencing that through an inode; some of them can even store the contents of small files there. However --- in most cases the directory entry is a link to an inode.)
This allows one to have multiple links to a file. In other words you can have many different names for a file --- and you can have identical names in different directories.
It turns out that most filesystems use this feature extensively to support the directory structure. Directories are just inodes that are mostly just like files. Somewhere you have a parent directory. It contains a link to you. Each of your subdirectories contains a ".." link to its parent (you). Thus each directory must have a link count that is equal to its number of subdirectories plus two (one for . and another for ../somelink.to.me).
(Note: On most modern forms of UNIX there is a prohibition against creating additional named hard links to directories -- this is apparently enforced in order to make things easier for fsck).
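You can verify the "subdirectories plus two" rule directly (again assuming GNU stat for the `%h` field):

```shell
d=$(mktemp -d)              # fresh directory
stat -c %h "$d"             # prints 2: its own '.' plus the entry in its parent
mkdir "$d/a" "$d/b"         # each child adds a '..' link back to the parent
stat -c %h "$d"             # prints 4 = 2 subdirectories + 2
rm -r "$d"
```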
2) Explain why the name of the command is called remove (rm)?
It seems pretty self-explanatory to me. You're removing a link. If that link is the last one to that file, then you've removed the file as well.
3) What happens to the blocks referenced by the inode when the link count goes to zero?
Normally the data block would be returned to the free list. The free list is another data structure on UNIX filesystems. I think it is usually implemented as a bitmap.
Note: On some forms of UNIX the filesystem driver might implement a secure delete feature, which might implement arbitrarily complex schemes of overwriting the data with NULs, with random data, etc. There is a special feature in Linux which is reserved for this use --- but which is not yet implemented. You might find similar features in your favorite form of UNIX.
4) What data is present in these blocks after the inode has been cleared?
That depends on the filesystem implementation. The blocks usually would still contain whatever data was lying around in them at the time that they were freed.
If you're thinking: "Ooooh! That means I can peek at other people's data after they remove it!" Think again. Any decent UNIX implementation will ensure that those data blocks are clear (zero'd out) as they are re-allocated.
5) How does the removal of an inode which is a symbolic link change the answer to 3) and 4)?
Symbolic links may be implemented by storing the "data" in the directory entry. In which case the unlink() simply zeros out that directory entry in whatever way is appropriate to the filesystem on which it is found.
Symbolic links may also be implemented by reference to an inode --- and by storing the target filename in the data blocks that are assigned to that inode. In which case they are treated just like any other file.
Note that removing a symbolic link with 'rm' should NEVER affect the target file links or inodes. The symbolic link is completely independent of the hard links to which it points and the inodes to which those refer.
Thank you for your help.
As I'm sure you noticed this sounds to me like a "do my homework" message. However, I've decided to answer it since it is likely to be of interest to many of my readers.
You may also have noticed that I was a bit vague on a number of points. Keep in mind that quite a lot of this depends on which version of UNIX you're using, which filesystem you're talking about (Linux, for example, supports over a dozen different types of local filesystem), and how you've configured it.
Of course you could learn quite a bit more about it by reading the sources to a Linux or FreeBSD kernel.
From kd on Mon, 01 May 2000
I recently installed Suse Linux on a machine to be a server, but I cannot telnet to the linux server from my other machines. can you help?
~kelly
Short answer: Probably TCP Wrappers and the old "double reverse lookup problem." Try adding an entry in /etc/hosts to refer back to your client(s) and make sure that your /etc/nsswitch.conf and /etc/hosts.conf are configured to honor "files" over DNS and NIS.
You could have been a bit more vague. You could have left out the word "telnet"?
When asking people technical support questions you have to ask:
How many possible causes are there to this problem? How many of them have I eliminated? How have I eliminated them? Can I eliminate some more? What is the error message I'm getting (if any)? What was I expecting? What happened that didn't match those expectations?
For example: Can you ping the server from the client system? (That eliminates many IP addressing, routing, firewall and packet filtering questions). Can you telnet from that client to any other server? (That eliminates most of the questions that relate to proper client software/system configuration and function). Can I access any other service on this client? (Web server, file or print services, etc.)
Then you ask: What did I expect to happen when I telnetted to that system? I'd expect to get a set of responses something like:
Trying 123.45.67.89
Connected to myserver.mydomain.not
Escape character is '^]'.
Debian GNU/Linux 2.2 myserver.mydomain.not
myserver login:
So, what did you get? Did you see the "Trying" line? That would mean that the telnet DNS or other hostname lookup returned something. Did the IP address in the trying line match that of your new server? That would mean that your DNS is correct! Did you get the "connected to" line? That suggests that the routing is correct. Did it just sit there for a long time? How long? What if you wait for five or ten minutes? Does it eventually connect?
It sounds like you have the old "double reverse DNS" problem. You are probably using DNS and you probably don't have proper reverse DNS (PTR) records for your client system(s). Do a search in the Linux Gazette archives for several discussions on this.
When you are getting free software and free support, it's important to do your homework. I typically will put about 10 hours into trying to solve a problem before I'll write up a question to post to the newsgroups, mailing lists, authors/maintainers, etc.
Of course I can understand part of the problem you might be facing. It sounds like you have little or no Linux experience, or at least little or no experience in setting up Linux networking.
You probably don't know all of the elements that go into "telnetting into your server." Here's the basic rundown:
You have to have a client (telnet command). That has to be on a system with TCP/IP installed, configured and working. It must have an IP address and a route to your server.
You have to have a server (in.telnetd). It would normally be launched on demand by a dispatch program (inetd) which would be reading configuration out of a configuration file (/etc/inetd.conf).
On Linux systems the /etc/inetd.conf is usually configured to run most programs under an access control and logging utility called "TCP Wrappers" (/usr/sbin/tcpd). That utility refers to a couple of configuration files (/etc/hosts.allow, and /etc/hosts.deny) and it does some "paranoid" consistency checking to try and ensure that the client "is who he claims to be." The specifics of this paranoid checking are referred to as a "double reverse DNS lookup."
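As a sketch, a conservative TCP Wrappers policy might look like this. The network address is an example, not taken from the letter:

```
# /etc/hosts.deny -- deny everything not explicitly allowed
ALL: ALL

# /etc/hosts.allow -- then permit telnet from the local net only
in.telnetd: 192.168.1.
```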
This requires that the client system's IP address somehow be registered in some sort of naming service that the server is configured to query. The easiest of these in most cases is to simply add the appropriate IP address (and some arbitrary name) in the /etc/hosts file. A better way is to add an appropriate PTR record to your DNS zone.
Linux uses a modular name services resolution system. Newer versions of Linux use the /etc/nsswitch.conf files to control the list of name services that are used for each name space (users/accounts, groups, hosts and networks, services, mail aliases, file server maps, etc). In most cases you wouldn't have to modify the nsswitch.conf to make it look at the /etc/hosts file. In other cases you might.
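For example (the client address and name here are placeholders):

```
# /etc/nsswitch.conf -- consult /etc/hosts ("files") before DNS
hosts:  files dns

# /etc/hosts -- give the client an entry the server can resolve
192.168.1.20   client1.example.lan   client1
```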
In previous months I've gone into greater detail about how to troubleshoot problems in accessing TCP services on Linux systems. Look for references to tcpdump and strace to find out more.
(Summary: You can replace the entry in /etc/inetd.conf with a wrapper script that runs 'strace' on the program, thus logging what the program is trying to do in great detail. You can also run 'tcpdump' on any machine on the local LAN segment, seeing the traffic between your client and server in great detail).
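A sketch of such a wrapper script. The path, log location, and service are examples; the corresponding line in /etc/inetd.conf would point at this script instead of at in.telnetd directly:

```shell
#!/bin/sh
# /usr/local/sbin/telnetd-trace -- hypothetical debugging wrapper.
# Log every system call the real daemon (and its children, -f) makes,
# then hand over control with exec so no extra process lingers.
exec /usr/bin/strace -f -o /var/tmp/telnetd.trace /usr/sbin/in.telnetd "$@"
```

Remember to remove the wrapper once you've found the problem; the trace file grows quickly and may log sensitive data.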
Unfortunately these tools are rather advanced, very powerful, and correspondingly difficult to use effectively. (You can probably get the information from them pretty easily --- the problem is in configuring them to provide just the info you need, and in parsing and understanding what they tell you.)
Hopefully I've guessed correctly on what your problem is. Otherwise search through my back issues and the FAQ and do lots of troubleshooting. Ask a more detailed question.
From Milton bradley on Tue, 02 May 2000
Hello,
Don't really know if you'll answer my questions but it doesn't hurt to give it a try. If you can all I can say is thanks. Well here goes
the situation is this
You and your friends have decided that e-mail is the easiest way to get your homework done for you?
[I got another question from a different address at Hotmail yesterday. It had a similarly "Do my homework for me" tone to it.]
Directory trees can include large numbers of files. Referencing a file by full path name can be burdensome on the user. Consequently in UNIX there is an environment variable $PATH (e.g. .:/bin:/usr/bin) which directs the system to the directories it is to search for an executable file. All non-executable files are looked for only in the current working directory (.).
Actually this set of propositions is full of minor inaccuracies. First the $PATH environment variable is not a feature of UNIX per se. It is not unique to UNIX, and it is not necessitated by UNIX. However it is a widely used convention --- and it's probably required by POSIX in the implementation of shells and possibly some standard libraries.
Non-executable files are found according to the semantics of the program doing the opening. Usually this is a path (either an absolute path from the root directory, or one that is relative to the current working directory, or $CWD).
The main flaw in your propositions is the claim that the PATH exists primarily for convenience. There is actually a more important reason for things to use the PATH.
questions are
1) Why shouldn't other non-executable file be referenced by this mechanism?
Why should they?
2) SuperUsers are cautioned that the shell should not look in the current working directory first (e.g. /bin:/usr/bin:.) for security reasons. Why?
All users are cautioned that adding . (CWD, the current working directory) to their PATH carries some risk.
Let's say that you put . on your path. If you put it at the beginning of your path you've implemented a policy that any executable in the current directory takes precedence over any other executables by that name. So I'm an evil user and I just create a program named 'ls' which does "bad things(TM)"
(I'll leave the exact nature of "bad things(TM)" to your imagination).
When 'root' or any other user then does a 'cd' into my directory and types 'ls' (a very common situation) then my program runs in their security context. I effectively can do anything that they could do. I can access any file they can access. I can completely subvert their account.
Doh!
So let's put that . at the end of the PATH. That solves the problem. Now the /bin/ls or /usr/bin/ls will be executed in preference to my copy of 'ls.'
So now the user "evil" has to get more clever. He makes a number of useful links to his "bad things(TM)" script. These are carefully crafted strings like: "sl" and "ls-al" (common typos that the hurried user might make while visiting my directory).
Quod erat demonstrandum.
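The attack is easy to demonstrate harmlessly in a scratch directory. Our "trojan" just prints a message instead of doing anything bad:

```shell
tmp=$(mktemp -d)            # scratch directory standing in for "evil's" home
cat > "$tmp/ls" <<'EOF'
#!/bin/sh
echo "bad things(TM) would run here, as user $USER"
EOF
chmod +x "$tmp/ls"

# a victim with . at the front of the PATH visits and types 'ls':
( cd "$tmp"; PATH=".:$PATH"; ls )    # runs the trojan, not /bin/ls
rm -r "$tmp"
```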
3) The c-shell creates a hash table of the files in $PATH on start-up. Give one advantage of this scheme:
The hash table is basically an index of all executables on the path. Thus one can find, in effectively constant time, whether an executable exists and where it is. (Look up hashing and "big O" notation in any textbook on computational complexity analysis.)
4) Give one disadvantage of the above mentioned scheme:
I'll give two.
- The shell will need to malloc more memory than a non-hash version would require. It needs to build the hash table and keep it in core. Moreover this data is not shareable memory --- it is private to each instance of the shell.
- The hash table may get out of sync with the real list of executables on the disk. Some additional binaries may be added and the shell has no way of detecting it. (Shells that support PATH hashing generally also offer some command to update their hash table --- 'rehash' and 'hash -r' are common).
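For example, after installing a new binary you can refresh the stale table by hand:

```shell
# Bourne-family shells (bash, ksh, sh): forget all remembered command locations
hash -r
# csh/tcsh users would type 'rehash' instead
```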
5) Since the system can easily maintain a list of files referenced in the course of a login session, one could also maintain a REFERENCE FILE TABLE and use it as part of a scheme to locate files. Give one advantage of this scheme:
Hmm. MU!
Which "one" could do this? Would this be a new API? What programs would support it? How?
Ergo I unask your question.
6) Give one disadvantage of this scheme:
Commands with the same name are presumed to provide compatible semantics. Ambiguity among data files is likely to have severe consequences.
One could use expressions like `locate foo` in each case where one wished to refer to "the first file named 'foo' on my data search path." One could certainly implement an API that took filenames, perhaps of the form: ././foo and resolved them via a search mechanism.
(Note: GNU systems, such as Linux, often have the "updatedb" or "slocate" packages installed. These provide a hashed index of all files on the system which are linked through publicly readable directories. Thus the `locate` command expression could be used already --- though the user wouldn't be able to implement a policy over how many and in which order the file names were returned. It would be a simple matter of programming to write one's own shell function or script which read a DPATH environment variable, called the 'locate' command, and searched the returned list for matches in a preferential order.)
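A sketch of such a shell function, assuming a colon-separated DPATH variable of the caller's choosing. To keep it self-contained it searches the listed directories directly rather than calling 'locate':

```shell
# dsearch NAME -- print the first match for NAME along $DPATH, if any
dsearch () {
    _name="$1"
    _old_ifs="$IFS"; IFS=:
    for _dir in $DPATH; do              # split $DPATH on colons
        if [ -f "$_dir/$_name" ]; then
            IFS="$_old_ifs"
            echo "$_dir/$_name"
            return 0
        fi
    done
    IFS="$_old_ifs"
    return 1                            # nothing found
}

# example use:
#   DPATH=".:$HOME/data:/usr/share/data"
#   dsearch notes.txt
```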
BTW: Some shells implement a CDPATH environment setting.
Here's an excerpt from the 'bash' man page:
CDPATH The search path for the cd command. This is a colon-separated list of directories in which the shell looks for destination directories specified by the cd command. A sample value is ".:~:/usr".
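For example (the directory names here are illustrative):

```shell
# make a scratch area to demonstrate CDPATH
base=$(mktemp -d)
mkdir "$base/src"
CDPATH="$base"
cd src          # found via CDPATH; the shell prints the resulting directory
pwd             # now $base/src
```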
As I see it the main reason for UNIX to implement support for an executable search PATH is to allow scripts to be more portable, while allowing users and administrators to implement their own policies and preferences among multiple versions of executables by the same name.
Thus when I use 'awk' or 'sed' in a script I don't care which 'awk' or 'sed' it is and where this particular version of UNIX keeps its version of these utilities. All I care about is that these utilities provide the same semantics as the rest of my scripts and commands require.
If I find that the system default 'awk' or 'sed' is deficient in some way (and if I'm a "mere mortal user") I can still serve my needs by installing a personal copy of a better 'awk' (gawk or mawk) and/or a better 'sed' (such as the GNU version). PATHs are the easiest way to accomplish this.
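The mechanics are just a personal bin directory at the front of the PATH. The directory name is a convention, not a requirement:

```shell
mkdir -p "$HOME/bin"
PATH="$HOME/bin:$PATH"      # personal versions now shadow the system ones
# install your preferred implementations there, e.g. a copy of gawk as ~/bin/awk
command -v awk              # shows which awk the shell will actually run
```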
So, the disadvantage of implementing some sort of "data path" feature in the UNIX shells and libraries would basically be:
IT'S A STUPID IDEA!
From Walter Ribeiro de Oliveira Jr. on Tue, 02 May 2000
I read a question about not being able to use telnet to connect to a linux box... you complained about very little information, I agree with you, but I have a suggestion: isn't the problem about trying to telnet as the root user, with /etc/securetty not permitting the remote terminals to do so? I mean, to telnet as the root user, you need to edit /etc/securetty to allow it... Hugs, see ya
Of course that is a different possibility. However, editing /etc/securetty is a very bad way to do this. You'd have to add all of the possible pseudo-tty device nodes to that list --- which would be long and pretty silly.
If one really insists on thwarting the system policy of preventing direct root logins via telnet, then it's best to do so by editing the /etc/pam.d/login configuration file to comment out the "requisite pam_securetty.so" directive:
# Disallows root logins except on tty's listed in /etc/securetty
# (Replaces the `CONSOLE' setting from login.defs)
auth       requisite  pam_securetty.so
... assuming that you are using a PAM based authentication suite -- as most new Linux distributions do. As noted in the excerpted comments from my .../pam.d/login file (as installed by Debian/Potato) there is an applicable setting in /etc/login.defs if you're using JF Haugh's old shadow suite without PAM.
Better Answer: use 'ssh'!
From John K. Straughan on Tue, 02 May 2000
I have a question stuck in my head which is keeping me up at night! What was the name of the very first GUI program that the original AOL software was based upon? This would have been around 1987,1988,1989. It was prior to MS Windows. AOL wasn't the only company to use it. It never really evolved, but there were some applications written for it. I'm thinking it was GEO something, or something GEO. It was for PC, MS-DOS systems. Please help so I can sleep!!! Thanks - John
(Note: it's a bad idea to include HTML attachment/copies of your e-mail to most people. I'd suggest doing that only in cases where you know that the correspondent involved prefers that particular format).
I don't know what package you're thinking of. As far as I remember the original AOL client software was purely for Apple Macintosh systems.
However, it sounds like you're talking about some version that might have run on GeoWorks Ensemble. GeoWorks Ensemble was actually a predecessor of MS Windows --- but it did run on 8086 (XT class) computers on which MS Windows was never supported. If I recall correctly GeoWorks originally released GEOS, an operating system and graphical environment for the Commodore 64?
Geoworks Inc. has gone on to focus on things like cell phones and WAP. There was a /. (http://www.slashdot.org) thread about their recent attempts to use U.S. and Japanese patents in ways which may stifle the deployment of free WAP and WML packages.
Meanwhile the desktop software that was part of Geoworks Ensemble appears to have been licensed out or spun off to a company called "New Deal Inc." (http://www.newdealinc.com). They specifically mention compatibility with Linux DOSEMU on their web site. This might make an interesting application suite --- though a native version for Linux would be nicer.
There was also the GEM graphical environment by Digital Research. This was the GUI on which Ventura Publisher was originally based. I think that GEM was basically a clone of the Xerox PARC look-and-feel --- very similar in appearance and behavior to the Xerox 820 and to the original Macintosh Finder software.
DR was eventually sold to Caldera by Novell, and spun off again as "Caldera Thin Clients." Meanwhile GEM was released under the GPL, and it seems that the canonical site for ongoing GEM development on the net would be at: http://www.deltasoft.com/news.htm
Hope that helps.
From Charles Hethcoat on Thu, 04 May 2000
I was excited to find out about Win4Lin and went straight to their web page for more details. There I read that they only work with Windows 95 and 98. They explain why in their white paper, and the reasoning is, well, reasonable. But I don't think I will be able to use Win4Lin where I work. Here's why.
My company sees to it that my computer runs NT. This was done because NT is far more stable than 9X. Not perfect, but pretty stable. But I would much prefer to use Linux, and I do have Debian installed on my computer. I boot it via a boot disk, and don't fool with lilo.
Since I don't have Windows 95, I can't use Win4Lin. Pity. I could make good use of it. I wonder how many other people are in my position?
Charles Hethcoat
Try VMware (http://www.vmware.com) instead. It does run a full hardware system emulation and can run NT. It can even run a copy of Linux under Linux or Linux under NT (though that seems like a horrible waste).
You might also watch the free virtual machine project (which is not yet ready for production use) called Plex86 (at http://www.freemware.org). That's based on the work of Kevin Lawton (Bochs) and is apparently now sponsored by Mandrakesoft (http://www.linux-mandrake.com/en).
Of course there's still WINE (http://www.winehq.com). That will run some of your MS Windows applications natively under Linux. There's also still the opportunity to access some of your applications remotely through VNC. You'd run the VNC server on one of your (other) NT systems and access it via the Linux native VNC client (or the Java client, if you really wanted to).
From J.Keo Power on Fri, 05 May 2000
Hi Jim,
My name is Keo. I have been trying to write a script that provides a salutation to the user, though that is different depending on who logs in. There are only three of us logging in on the system, and I want to have a little fun with them by putting in some cool messages.
So far, I have attempted to write a script in vi named intro and placing the file in my home directory. I have "chmod to ugo+x intro". Then going to the /etc/bashrc file and putting in the path of the executable intro file in my home directory.
The bashrc is trying to run the executable, but is returning the message "unary command expected". I am not sure what that means!
If you could give me a little guidance on if my methodology is correct as far as the files I am manipulating, and possibly an outline of the script to write. here is what I have attempted (last time):
#! intro
# test of login script
name=$LOGIN
if [ $name = keo ]
then
    echo "Whats up mano?"
else
    if [ $name = dan ]
    then
        echo "Lifes a peach, homeboy."
    else
        if [ $name = $other ]
        then
            exit
        fi
    fi
fi
exit
Thanks for any help. Keo
I've been trying to clean out my inbox of the 500 messages that have been sitting unanswered and unsorted for months.
This is one of them that I just couldn't pass up.
First problem with this script is right at the first line. That should be a "she-bang" line --- like:
#!/bin/sh
... which is normally found at the beginning of all scripts.
The "she-bang" line is sometimes called "hash bang" -- so-called because the "#" is called a "hash" in some parts, and the "!" is often called a "bang" among hackers, it's also short for "shell-bang" according to some. It looks like a comment line --- but it is used by the system to determine where to find an interpreter that can handle the text of any script. Thus you might see 'awk' programs start with a line like:
#!/usr/bin/gawk -f
... or PERL programs with a she-bang like:
#!/usr/local/bin/perl
... and programs written using the 'expect' language (a derivative of TCL) would naturally start with something like:
#!/usr/local/bin/expect -f
After you fix that, here are some other comments that I've inserted into your code (my comments start with ## --- a double hash):
#! intro
# test of login script
name=$LOGIN
## should be quoted: name="$LOGIN" in case $LOGIN had
## any embedded whitespace. It shouldn't, but your scripts
## will be more robust if you code them to accept the most
## likely forms of bad input.
## Also I don't think $LOGIN is defined on most forms of
## UNIX. I know of $LOGNAME and $USER, but no $LOGIN
## Finally, why assign this to some local shell variable?
## Why not just use the environment variable directly
## since you're not modifying it?
if [ $name = keo ]
then
    echo "Whats up mano?"
## That can be replaced with:
## [ "$name" = "keo" ] && echo "What's up mano?"
## Note the quotations, and the use of the && conditional
## execution operator
else
## don't need an else, just let this test drop through to here
## (the else would be useful for cases where the tests were expensive
## or they had "side effects.")
    if [ $name = dan ]
    then
        echo "Lifes a peach, homeboy."
## [ "$name" = "dan" ] && echo "Lifes a peach, homeboy."
    else
        if [ $name = $other ]
        then
            exit
        fi
    fi
fi
exit
## $other is undefined. Thus the [ ('test') here will give
## you a complaint. If it was written as: [ "$name" = "$other" ]
## then the null expansion of the $other (undefined) variable
## would not be a problem for the 'test' command. The argument
## would be there, albeit empty. Otherwise the = operation
## to the 'test' command will not have its requisite TWO operands.
## All eight of these trailing lines are useless. You can just
## drop out of the nested tests with just the two 'fi' delimiters
## (technically 'fi' is not a command, it's a delimiter).
Here's a more effective version of the script:
#!/bin/sh
case "$LOGNAME" in
    jon)
        echo "Whats up mano?"
        ;;
    dan)
        echo "Lifes a peach, homeboy."
        ;;
    *)
        # Do whatever here for any other cases
        ;;
esac
This is pretty flexible. You can easily extend it for additional cases by inserting new "foo)" clauses with their own ";;" terminators. It also allows you to use shell globbing and some other pattern matching like:
#!/bin/sh
case "$LOGNAME" in
    jon|mary)
        echo "Whats up mano?"
        ;;
    dan)
        echo "Lifes a peach, homeboy."
        ;;
    b*)
        echo "You bad, man!"
        ;;
esac
Note that this greets "jon" or "mary" in the first clause, "dan" in the second, and anyone whose login name starts with a "b" in the last case.
Any good book on shell scripting will help you with this.
From Bubba44hos on Fri, 05 May 2000
I have a question and I can't seem to find the answer anywhere. My question is "sttep through a p[rogram being loaded into the system". If you could help, that would be great. Thank you for your time, Brian
Argh!
What does this mean? First "step through a program being loaded into the system" is not a question; it's a directive.
Does this (instructor?) want you to explain how to "single step" through a program (using a debugger like 'gdb')? Does he or she want you to explain the process of how programs get "loaded into" (installed and configured) a system? Does he or she want to hear about how programs are loaded (executed) by a shell?
Anyway those are all interesting and vastly different topics. None of them have simple answers, since they depend a lot on what sort of system, who is doing the "loading," and what sort of "program" we are talking about.
From Mark Hugo on Fri, 05 May 2000
Jim:
I hope this doesn't make me sound too ignorant. Is it possible to
get a Unix system (Not a Linux) on a PC?
I have a potential job opportunity if I have some Unix "experience". Is there a simulator available for a PC? Are Linux and Unix similar enough to learn on RedHat Linux? Or are they too different?
Mark Hugo, Mpls, MN
Linux is the best UNIX "simulator" for the PC.
Linux is similar enough to other forms of UNIX for over 90% of the work you would do as a sysadmin and over 80% of what you'd be doing in the course of normal (applications level) programming.
You can also get a variety of other forms of UNIX for the PC: FreeBSD (http://www.freebsd.org) and its ilk (NetBSD http://www.netbsd.org, BSDI/OS http://www.bsdi.com, and OpenBSD http://www.openbsd.com), Solaris/x86 (http://www.sunsoft.com) and SCO "OpenDesktop" and Unixware (http://www.sco.com).
Most of these are free. All have versions that are "free for personal use."
BTW: The fact that your experience is limited to PCs is more likely to be a problem than the fact that you only have Linux experience. PCisms are worse in many regards than the differences between PCs and other forms of UNIX.
Also note that Linux is not just for PCs anymore. There are versions that run on Alpha, PowerPC (Macintosh and other), SPARC and other platforms.
From Mark Chitty on Mon, 08 May 2000
Hi Jim,
Thanks for the solution. I had gone down a different path but this has cleared up that little conundrum!! It seems obvious now that you point it out.
Your reply is much appreciated.
Oh yes, if you ever write a book let me know. I'll buy it !!
cheers, mark
Actually I have written one book --- it's _Linux_System_Administration_ (New Riders Publishing, 1999, with M Carling and Stephen Degler). (http://www.linuxsa.com).
However, I might have to write a new one on shell scripting. Oddly enough it seems to be a topic of growing interest despite the ubiquity of PERL, Python, and many other scripting languages.
In fact, one thing I'd love to do is learn enough Python to write a book that covers all three (comparatively). Python seems to be a very good language for learning/teaching programming. I've heard several people refer to Python as "executable pseudo-code."
Despite the availability of other scripting languages, the basic shell, AWK, and related tools are compelling. They are what we use when we work at the command line. Often enough we just want our scripts to "do what we would do manually" --- and then to add just a bit of logic and error checking around that.
Extra tidbit:
I recently found a quirky difference between Korn shell ('93) and bash. Consider the following:
echo foo | read bar; echo $bar
... whenever you see a "|" operator in a shell command sequence you should understand that there is implicitly a subshell (new process) that is created (forked) on one side of it or the other.
Of course other processes (including subshells) cannot affect the values of your shell variables. So the sequence above consists of three commands (echo the string "foo", read something and assign it to a shell variable named "bar", and echo the value of --- read the $ dereferencing operator as "the value of" --- the shell variable named "bar"). It consists of two processes: one on one side of the pipe, and the other on the other side. At the semicolon the shell waits for the completion of any programs and commands that precede it, and then continues with a new command sequence in the current shell.
The question becomes whether the subshell was created on the left or the right of the | in this command. In bash it is clearly created on the right. The 'read' command executes in a subshell. That then exits (thus "forgetting" its variable and environment heaps). Thus $bar is unaffected after the semicolon.
In ksh '93 and in zsh the subshell seems to be created to the left of the pipe. The 'read' command is executed in the current shell and thus the local value of "bar" is affected. Then the subsequent access to that shell variable does reflect the new value.
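You can see the difference with a two-line test:

```shell
bar="unset"
echo foo | read bar     # in bash, 'read' runs in a subshell and its result is lost
echo "$bar"             # bash prints "unset"; ksh '93 and zsh print "foo"
```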
As far as I know the POSIX spec is silent on this point. It may even be that ksh '93 and zsh are in violation of the spec. If so, the spec is wrong!
It is very useful to be able to parse a set of command outputs into a local list of shell variables. Note that for a single variable this is easy:
bar=$(echo foo)
or:
bar=`echo foo`
... are equivalent expressions and they work just fine.
However, when we want to read the output into several variables, and especially when we want to do so using the IFS environment variable to parse those values, then we have to resort to inordinate amounts of fussing in bash, while ksh '93 and newer versions of zsh allow us to do something like:
grep ^joe /etc/passwd | IFS=":" read login pw uid gid gecos home sh
(Note the form: 'VAR=val cmd' as shown here is also a bit obscure but handy. The value of VAR is only affected for the duration of the following command --- thus saving us the trouble of saving the old IFS value, executing our 'read' command and restoring the IFS).
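For completeness, here is a sketch of one common bash workaround: capture the pipeline's output and feed it to 'read' through a here-document, so the 'read' runs in the current shell and the variables survive. (The sample passwd line below is made up for illustration; in real use you'd set line=$(grep ^joe /etc/passwd).)

```shell
# Made-up sample line standing in for grep's output:
line="joe:x:1000:1000:Joe User:/home/joe:/bin/bash"
# 'read' runs in the current shell, so the variables persist:
IFS=: read login pw uid gid gecos home sh <<EOF
$line
EOF
echo "$login $uid $home"
```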
BTW: If you do need to save/restore something like IFS you must use proper quoting. For example:
OLDIFS="$IFS"   # MUST have double/soft quotes here!
IFS=:,
# do stuff parsing words on colons and commas
IFS="$OLDIFS"   # MUST also have double/soft quotes here!
Anyway, I would like to do some more teaching in the field of shell scripting. I also plan to get as good with C and Python as I currently am with 'sh'. That'll take at least another year or so, and a lot more practice!
From Hank on Wed, 10 May 2000
I understand that under Linux you can set the home directories to a certin size. Either I am not looking in the right place or for the right thing, but I can't seem to find any info on this. I run Mandrake v7.0, and I am just trying to learn about Linux as best I can. I love the Linux on a floppy distributions, I can show everyone I know how well Linux runs now.
Thanks for your help, Hank
It depends on what you mean by "set ... to a certain size."
First, your home directories under Linux, or any form of UNIX, can be any normal directory tree. Normally the home directory for each account is set via a field in the /etc/passwd file (the main repository for all user account information --- ironically the one vital bit of account data that is normally no longer stored in /etc/passwd is the user's password hash; but that's a long story).
Under Linux it is common to have all of the user home directories located under /home. This should be on its own filesystem (partition) or it should be a symlink to some directory that is not on the root filesystem. Actually the whole issue of how filesystems should be laid out is fraught with controversy among sysadmins and techies. There is a relatively recent movement that says: "just make it all one big partition and forget about all this fussing with filesystems."
Anyway, you are free to configure your filesystems pretty much any way you want under Linux. You can have several hard drives: two per IDE channel (/dev/hda and /dev/hdb for the first controller, /dev/hdc and /dev/hdd for the next, and so on), 7 for each traditional SCSI host, and 15 for the "wide" controllers (/dev/sda, /dev/sdb, etc). Each hard drive can have up to four primary partitions (/dev/hda1, /dev/hda2, etc), one of which can be an "extended partition container" (actually there are apparently now TWO types of "extended container" partition types, so you can have one of each). The "extended container" partitions can hold a number of additional partitions. I've heard that you can have up to 12 partitions on a drive (I don't think I've ever gone beyond 10).
Unfortunately you have to make these decisions early on (when running 'fdisk' during your Linux installation). There is an 'ext2resize' program floating around the 'net. I haven't tried it yet (maybe on my next "sacrificial" system).
So, you can limit the size of the whole home directory tree by simply putting /home on its own filesystem (and sizing it as you need).
To limit how much space individual users can consume (under their home directories or on any other filesystems) you can use the Linux "quotas" support. This involves a few steps. You must ensure that the "quotas" feature is enabled in your kernel (I suspect that Mandrake ships with this setting). Then you want to read the instructions in the Quota mini-HOWTO at http://www.linuxdoc.org/HOWTO/mini/Quota.html
Once the kernel support is there basically you do the following:
*) Create a couple of (initially empty) files at the root of each partition (fs) on which you wish to enforce quotas.

*) Edit your /etc/fstab file to add the usrquota and/or grpquota mount options to each of these filesystems.

*) Run the command 'edquota' (with the -u or -g option for user or group quotas respectively) and create a series of text entries to describe your quota policies in the appropriate syntax.

*) Ensure that the "quotaon" command is run by your system startup scripts (the init or "rc" scripts). (This is probably already being managed by your distribution.)
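Those steps can be sketched roughly as follows. This is an illustration only --- the commands need root, and the device name, mount point and user name here are examples, not values from your system:

```shell
# (1) empty quota files at the root of the filesystem:
#       touch /home/quota.user /home/quota.group
#       chmod 600 /home/quota.user /home/quota.group
# (2) the /etc/fstab entry for /home gains the usrquota option:
#       /dev/hda3  /home  ext2  defaults,usrquota  1  2
# (3) describe your policies for a user (opens your $EDITOR):
#       edquota -u hank
# (4) start enforcement (normally done for you by the rc scripts):
#       quotaon -a
```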
Note that the mini-HOWTO is good, but you must follow it carefully. Be particularly careful about the syntax you use in these quota files.
The whole affair is further complicated by the existence of both hard and soft quotas. Basically you can set two different limits on each user or group's utilization of the space on each of your filesystems. The "soft quota" marks a point at which the users will start to get warnings while the hard quota marks a point at which attempts to create files or allocate more blocks to existing files will fail.
Read Mr. Tam's mini-HOWTO --- it's pretty old, but it has the details you need. It also shows some techniques for using one user's quota configuration as a template --- so you can clone those settings to other users quickly and automatically without having to manually edit your quota files all the time.
From pundu on Wed, 10 May 2000
Hi,
I would like to know how one can calculate cpu load and memory
used by processes as shown by 'top' command. It would be nice if any one can explain me how you could do these by writing your own programs , or by any other means.
Why don't you download the sources to 'top' and 'uptime' and read them? On a reasonably modern Debian system you could just issue the command 'apt-get source procps' to have your system find, fetch, unpack and patch those. ('top', 'uptime', 'kill' and a number of other process management commands are in the "procps" package --- since these are all tools that implement process management and reporting using the /proc kernel support.)
(Technically there were/are other ways to do these sorts of process management things, in cases where you don't have /proc enabled --- but they are not widely used anymore. There is a /proc alternative that's implemented as a device driver --- for embedded systems, and there are some old techniques for doing it by reading some of the kernel's data structures through /dev/kmem --- basically by using root level read access to wander around the kernel's memory, extracting and parsing bits of it from all over.)
Your distribution probably came with sources (maybe on an extra CD) or you could always wander around Metalab (formerly known as Sunsite) http://metalab.unc.edu/pub/Linux to find lots of source code for lots of Linux stuff. You might also look at Freshmeat (http://www.freshmeat.net), Appwatch (http://www.appwatch.com) and even ExecPC's LSM (Linux Software Map) at http://www.execpc.com/lsm (You can even get 'appindex', a little curses package which can help you find apps from Freshmeat and the LSM by downloading RSS files from each of them on demand).
[ As of publication time, there's another one, called IceWALKERS (www.icewalk.com) -- Heather ]
Another good site to find the sources to your free software is the "Official GNU Web site" (http://www.gnu.org) and at the old GNU master archive site: ftp://prep.ai.mit.edu/gnu
Of course you could always compare these sources to those from another free implementation of UNIX. Look at the FreeBSD web site (http://www.freebsd.org) and its ilk (OpenBSD http://www.openbsd.org and NetBSD http://www.netbsd.org).
Of course I realize that you might not have realized that the source code was available. That's one of the features of Linux that you may have heard touted in the press. That "open source" thing means you can look at the sources to any of the core systems and packages (from the kernel, and libraries, through the compilers and the rest of the tool chain, and down into most of the utilities and applications).
I also realize that many people have no idea how to find these sources. Obviously the first step is to find out what package the program you want to look at came from. Under any of the RPM based systems (S.u.S.E., Red Hat, TurboLinux, Caldera OpenLinux, etc) you can use a command like 'rpm -qf /usr/bin/top' to find out that 'top' is part of the procps package. Under Debian you could install the dlocate package, or use a command like 'grep /usr/bin/top /var/lib/dpkg/info/*.list' or one like 'dpkg -S bin/top' (note I don't need a full path in that case). All of these will give you a package name (procps in this case). Then you can use the techniques and web sites I've mentioned above to find the package sources.
Incidentally the canonical (master) URL for procps seems to be:
ftp://people.redhat.com/johnsonm/procps/procps-2.0.6.tar.gz
... according to the Appindex and LSM entries I read.
From Drew Jackson on Sun, 14 May 2000
Dear sir:
I have recently installed an anti-virus software program that is
executed from the command-line. I would like for this service to run at regular intervals. (i.e. every 2 hours)
I am using a Red Hat 5.2 based platform without GUI support.
Thank you for your time and effort.
Sincerely, Drew Jackson
Short answer: use cron (the UNIX/Linux scheduling daemon/service).
The easiest way to do this would be to add a text entry to the /etc/crontab file that would look something like:
0 */2 * * * root /root/bin/vscan.sh
(Obviously you'd replace the filename /root/bin/vscan.sh with the actual command you need to run, or create a vscan.sh shell script to contain all of the commands that you want to run).
This table consists of the following fields: minute, hour, day of month, month, day of week, user, command. Each of the first five fields is filled with numbers, some counting from zero and some from one. So the minutes field is 0-59, from the first to the last minute within any hour. The "*" character means "every" (like in filename globbing, or "wildcard matching"). The hours are 0-23, the dates are from 1-31, etc. The syntax of this file allows one to specify ranges (9-17 for 9:00 am to 5:00 pm, for example), lists (1,15 for the first and fifteenth --- presumably one you'd use for dates within a month), and step patterns (such as the */2 in my example, which means "every other" --- there, every even-numbered hour). So, to do something every fifteen minutes of every other day of every month I'd use a pattern like: '*/15 * */2 * * user command'.
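A couple more /etc/crontab fragments mixing these forms (the script names here are made-up examples):

```shell
# every weekday at 9:00 am and 5:00 pm:
0 9,17 * * 1-5  root  /usr/local/bin/workhours.sh
# on the 1st and 15th of each month, at 6:30 am:
30 6 1,15 * *   root  /usr/local/bin/payroll.sh
```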
The day of week and the months can use symbolic names and English abbreviations in the most common version of the 'cron' utility (by Paul Vixie) that is included with Linux distributions.
You can read the crontab(5) man page for details. Note that there is a 'crontab' command which has its own man page in section one. Since section one (user commands) is generally searched first --- you have to use a command like: 'man 5 crontab' to read the section five manual page on the topic. (Section five is devoted to text file formats --- documenting the syntax of many UNIX configuration files).
This system is pretty flexible but cannot handle some date patterns that we intuitively use through natural language. For example: the 2nd Tuesday of the month doesn't translate directly into any pattern in a crontab. Generally the easiest way to handle that is to have a crontab entry that goes off the minimal number of times that can be expressed in crontab patterns, and have a short stub of shell code that checks for the additional conditions.
For example, to get some activity on the second Tuesday of the month you might use a crontab entry like:
0 0 * * 2 joe /home/joe/bin/2ndtuesday.sh

which runs (at midnight) every Tuesday. If we used a pattern like:

0 0 8-14 * 2 joe /home/joe/bin/2ndtuesday.sh

... our command would run on every Tuesday and on each of the days of the second week of the month (from the 8th through the 14th), since cron treats restricted day-of-month and day-of-week fields as alternatives. This is NOT what we want. So we use the former pattern and have a line near the beginning of our shell script that looks something like:
#!/bin/bash
# Which week is this?
weeknum=$[ ($(date +%e) - 1) / 7 + 1 ]   ## returns 1 through 5
[ "$weeknum" == 2 ] || exit 0
# Rest of script below this line:
Of course that could be shortened to one expression like:
[ "$[ ($(date +%e) - 1) / 7 + 1 ]" == 2 ] || exit 0
... which works under 'bash' (the default Linux command shell) and should work under any recent version of ksh (the Korn shell). That might need adjustment to run under other shells. This also assumes that we have the FSF GNU 'date' command (which is also the default under Linux).
Of course, if you were going to do this more than a few times you'd be best off writing one script that embodies this logic and calling that from all of the crontab entries that need it. For example we could have a script named 'week' that might look something like:
#!/bin/bash
## Week
## Conditionally execute a command if it is issued
## during a given week of the month.
## weeks are numbered 1 through 5
[ $# -ge 2 ] || {
    echo "$0 requires at least two args: week number and command" 1>&2
    exit 1
    }
[ "$(( $1 + 0 ))" == "$1" ] &> /dev/null || {
    echo "$0: first argument must be a week number" 1>&2
    exit 1
    }
[ "$[ ($(date +%e) - 1) / 7 + 1 ]" == "$1" ] || exit 0
shift
eval "$@"
... or something like that.
(syntax notes about this shell script: '[' is an alias for the 'test' command; '$#' is a shell scripting token that means "the number of arguments"; '||' is a shell "conditional execution operator" (means, if the last thing returned an error code, do this); '1>&2' is a shell redirection idiom that means "print this as an error message"; '$[ ... ]' and '$(( ... ))' enclose arithmetic expressions (a bash/ksh extension); '$@' is all of our (remaining) arguments; and the braces enclose groups of commands, so my error messages and exit commands are taken together in the cases I've shown here).
So this shell script basically translates to:
If there aren't at least 2 command line arguments here, complain and exit. If the first argument isn't a number (adding 0 to any number should yield the same number) then complain and exit. If the week number of today's date doesn't match the one given in the first argument then just exit (no complaint). Otherwise, forget that first argument and treat the rest of the arguments as a command.
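Those idioms are easy to poke at interactively. A small sketch (the argument list is arbitrary, chosen only to mimic a 'week 3 ...' invocation):

```shell
set -- 3 echo hello       # pretend the script was called with: 3 echo hello
echo $#                   # number of arguments: 3
echo $(( $1 + 0 ))        # arithmetic expansion on the first argument: 3
shift                     # drop the week number
"$@"                      # run the rest as a command: prints "hello"
```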
(Note: cron automatically sends the owner of a job e-mail if the command produces any output; that mail includes some diagnostic information, such as a non-zero (error) exit value, along with the job's output. Normally people write their cron job scripts to avoid generating any routine output --- they either pipe the output into an e-mail, redirect it to /dev/null or to some custom log file, and/or possibly add 'logger' commands to send messages to the system logging services ('syslog').)
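So a quieter version of the earlier crontab entry might discard routine output and let only error messages trigger the mail (the script name is the same made-up example as before):

```shell
# stdout discarded; anything written to stderr still gets mailed by cron:
0 */2 * * * root /root/bin/vscan.sh > /dev/null
```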
In some fairly rare cases it would be necessary to wrap the target command, or parts of it in single quotes to get it to work as desired. Those involve subtleties of shell syntax that are way beyond the task at hand.
A more elaborate version of that shell script might allow one to have a first argument that consisted of more than one week number. The easiest way to do that would be to require that multiple week numbers be quoted and separated with spaces. Then we'd call it with a command like 'week "1 3" $cmd' (note the double quotes around 1 and 3).
That would add about five lines to my script. Anyway, I don't feel like it right now so it's left as an exercise to the reader.
Anyway, 'cron' is one of the most basic UNIX services. It and the related 'at' command (schedule "one time" events) are vital system administration and user tools. You should definitely read up on them in any good general book on using or administering UNIX or Linux. (I personally think that they are woefully underused judging from the number of "temporary" kludges that I have found on systems. Hint: every time you do something that's supposed to be a "temporary" change to your system --- submit an 'at' job to remind you when you should look at it again; maybe to remove it).
BTW: I'd suggest that you seriously consider upgrading to a newer version of Linux. Red Hat 5.2 was one of the most stable releases of Red Hat. However, there have been many security enhancements to many of the packages therein over the years.
From Romulus Gintautas on Sun, 14 May 2000
First off, thank you for your time.
I did a man on ls but did not find what I was looking for. I'm looking for a linux equivalent of dir /s (DOS). Basically, I am looking for a way to find how much data is stored in any specific dir in linux (red hat 6.0). As you know, in dos, all you do is enter the dir in question and just do dir /s.
Under UNIX we use a separate command for that.
You want the 'du' (disk usage) command. So a command like:
du -sck foo bar
... will give you summaries of the disk usage of all the files listed under the foo and bar directories. It will also give a total, and the numbers will be in kilobytes. Actually "foo" and "bar" don't have to be directory names; you can list files and directories --- basically as many as you like. Of course you can mix and match these command line switches (-s -c -k, and many others).
To work with your free disk space you can use the 'df' (disk free) command. It also has lots of options. Just the command 'df' by itself will list the free disk space on all of your currently mounted regular filesystems. (There are about a half dozen pseudo-filesystems, like /proc, devpts, the new devfs and shmfs and some others, that are not listed by 'df' --- because the notion of "free space" doesn't apply to them.)
Anyway, read the man pages for both of these utilities to understand them better. Read the 'info' pages to learn even more.
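A sketch you can try in a scratch directory ("foo" and "bar" are the same placeholder names used above):

```shell
mkdir -p foo bar
echo "some data" > foo/file1
du -sck foo bar   # per-directory summaries plus a "total" line, in kilobytes
df -k .           # free space on the filesystem holding the current directory
```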
Incidentally --- if you want to get more detailed information about a list of files than 'ls' can provide, or you need the meta information in a custom format, then you usually want to use the UNIX/Linux 'find' command. This is basically a small programming language for "finding" a set of files that match a set of criteria and printing specific types of information about those files, or executing commands on each of them.
In other words 'find' is one of the most powerful tools on a UNIX system. As a simple example, if I want to find the average file sizes of all of the "regular" files under a pair of directories I can use a command like:
find foo bar -type f -printf "%s\n" | awk '{ c++; t+= $1 }; END { print "Average: ", t/c }'
The 'find' command looks at the files/directories named "foo" and "bar", finds all of them that are of type "f" (regular files) and prints their sizes. It doesn't print ANYTHING else in this case, just one size in bytes, per line. The 'awk' command computes the average (awk is a little programming language, simpler than PERL).
To find all of the files older than one week in the current directory you can use a command like:
find . -ctime +7
... for those that are newer than a week:
find . -ctime -7
... (BTW: UNIX keeps three timestamps on its files:
ctime is the timestamp on the "inode" --- updated when the file's meta-data OR data blocks are touched; mtime is the timestamp for the file's data --- updated when the data blocks are modified; and atime is the last "access" (read) time).
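You can see the three timestamps with 'ls' (a sketch; 'demofile' is just a scratch file created for the demonstration):

```shell
touch demofile
ls -l  demofile   # long listing shows mtime (modification time)
ls -lc demofile   # -c makes it show ctime (inode change time)
ls -lu demofile   # -u makes it show atime (last access time)
```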
I think the current version of GNU 'find' has about 60 options and switches (including support for -and, -or, and -not for combining complex expressions) and the -printf and -fprintf directives support about 25 different "replaceable parameters" and a variety of formatting options within some of those.
About the only bit of 'stat' information I can't get from 'find' is the "device number" on which a file resides. (Under UNIX every file can be uniquely identified by the combination of device number and inode; inodes are unique within any given device.) 'find' also doesn't (yet) give me the ability to print or test some special "flags" (BSD UFS) or "attributes" (Linux ext2).
I've been meaning to write a custom patch to add those features.
I apologize if this is a simple question. I am just starting in Linux and hope to learn a lot more.
Rom
That's O.K. I'm too tired to do hard questions at the moment.
From Rik Heywood on Mon, 15 May 2000
I am trying to create a shared library that my other programs can load and use. I have managed to get this to work and all is well. However, I am trying to limit the functions that the library exports to just the ones I want. On win32 there are numerous ways of achieving this (eg listing the functions you want to export in a .def file, adding __dllexport to the function definition). I feel sure it will be possible in Linux, but so far I have been unable to figure it out. Any ideas?
Rik Heywood.
I don't know why you'd do such a thing. It can't possibly be used for any security purpose (either someone or some program has read/execute permission to the whole shared library, or not).
From what I gather you "export" a C function from a library by documenting its interface in a header (.h) file. Frankly, even if the feature exists I think it would be of dubious value. If you limit access to some function you force the programmer to re-implement it in their own code (which goes against code re-use). If they do that then they've forked the functionality, and any refinement of the function(s) must now be done in multiple places (bad for maintainability). If you are simply trying to discourage the use of some internal interfaces (since they may change and you don't want to be saddled with backward compatibility responsibilities in those particular areas) then just comment and document them as internal (in your sources) and separate their prototypes into a different set of header files (which are not installed into the public include directory tree).
However, I'm not an expert. In fact I don't even consider myself to be a professional programmer (though I've done a bit of it here and there). So it's certainly possible that everything I've just said is idiotic gibberish. (Of course that would be possible even if I were a recognized expert.)
As for the fact that this "feature" exists in Microsoft DLLs and programming tools --- it sounds like it's probably primarily useful if you need to create binary products that take advantage of "hidden" (undocumented) private interfaces which you plan to keep from your competitors.
From Charles Gratarolli on Mon, 15 May 2000
Hi,
After a few crashes I managed to install a Corel Linux in my machine(Pemtium II 450, IDE Drive, 96 MB of memoy). When the system asked me for a login and password, it didn"t recognize and gave me the following messsage:
COREL LINUX 1.0(tty1)
Login:XXXXX(I gave it in the insyalation beggining) Password: Incorrect login
This is fairly difficult to read and I'm not sure of the context. I think you are saying: "After I installed Corel Linux and rebooted the system, I tried to enter the same name and password at its login prompt that I had entered during the installation process." (I've only installed Corel Linux a couple of times, so I don't remember the exact sequence of installation dialogs).
Normally you'd have been prompted to create a password for 'root' (the system administration account) and you would have been offered a chance to create one or more user accounts --- which involves selecting at least a user name and initial password for each account. (Usually there's also a chance to fill in a full name, change the account's "home directory" and "login shell" settings, set the account's primary group membership and possibly add that account to a list of other groups).
What happens if you use the name 'root' (all lower case, no capital letters) at the "Login:" prompt and enter your password for that account? (BTW: It's a good idea to keep those passwords different. It's a wretched idea to login as 'root' when you want to run "normal" applications like a web browser, mail program etc).
I left the password blank, as was said in the manual
Did the manual really suggest that you should leave a password blank? That's irresponsible.
For situations when you really want to have a service accessible from the console with no password, it is better to configure the system to skip the password request than to set the password to be empty. Basically a username/password combination can potentially be used to access any service on a Linux/UNIX system. Usernames are fairly easy to find for a system, so it is almost impossible to enforce any security policy on an account with no password. If you want a service or program to be accessible without a password it's almost certain that you want to limit the access to specific files (i.e. just your HTML files in your document root directory tree), through specific means (i.e. just through the web server, for read-only access), etc.
Anyway, many Linux systems are configured to forbid blank passwords. Thus, it may be that the installation program let you leave the password blank while the login program(s) are enforcing this common policy.
How can i change it now? considering I am a newbie.....
Thank you Charles G.
It depends. Is this a user account? Does logging in as 'root' work? If so, then just login as the root user (and open a "terminal" or "xterm" window if you've logged into a GUI) so you can type in commands.
First you need to know if the account you created exists.
Let's say you created your account name using your initials "cg." So you might use a command like:
grep cg: /etc/passwd
... if that doesn't pop-up a line that looks something like:
cg:x:1000:1000:Charles G:/home/cg:/bin/bash
... then you don't have a user account (or you mistyped something --- possibly when you created the account, or whatever).
You can create a user account using a command like:
useradd -m cg
... the -m tells 'useradd' to "make" a home directory for the new account. There are many options to the 'useradd' command. You can read more than you want to know about them by typing:
man useradd
Once you've created the account you can set the password using a command like:
passwd cg
... which, if done as 'root' will simply prompt you for a new password and ask you to repeat it. If you can type in the same string twice consecutively --- you will have successfully changed or set the password for that account.
You can also use the passwd command to change your own password by simply typing it (with no parameters or arguments). In that case it will require you to type your old password, and then repeat your new password twice.
Note that sometimes the 'passwd' command will complain that a password is "too short" or "too weak" or that it is "based on a dictionary word." The Linux 'passwd' command tries to enforce some "best practice" policies about your users' password selections in order to make the system more secure. Basically anyone who cracks into a user account on a system has a pretty good chance of using that to take control of the whole system eventually. (Also they can do quite a bit of damage to that user's files and quite a bit of snooping about in that user's e-mail etc. even if they don't manage to disrupt other users or the system itself.)
I realize that you may not care about all this "security stuff" as a new Linux user. After all, you're probably adopting Linux after years of using MS Windows, which has no concept of users and makes no effort to protect the system from "normal users" or to protect any one user's stuff from any other's.
However, it's a good idea to take a lesson from Microsoft's mistakes. You may want to consider having one account on your system for reading mail, a different one for doing your web browsing, another for playing games, and yet another for any of your important work. (With a little practice it's possible for these to share data without too much inconvenience while limiting the damage that a trojan horse (such as the ILOVEYOU e-mail virus) could do to your other work.)
(Of course Linux systems are unaffected by ILOVEYOU, Melissa and all of the other e-mail trojan/viruses so far. However, such a problem might eventually affect some Linux users. Luckily there are many different e-mail packages in widespread use under Linux --- any bug that could be used to exploit one is very unlikely to affect more than a small fraction of the total population. This "technodiversity" (analogous to the "biodiversity" that we need in our ecosystems) does protect us somewhat --- since the infection can't spread quickly or easily unless there is a critically high percentage of "monoculture" applications users).
(I could write a long article on the pros and cons of technodiversity vs. standardization and code re-use. However, I have a feeling that it would not be of much immediate interest to you.)
Getting back to your problem. If you don't have a working root password then the job is a little more difficult. Basically you need to boot up the system in "rescue mode" or from a "rescue disc or diskette" mount the root filesystem, possibly mount a "/usr" filesystem on top of that, run the 'passwd' command, unmount the filesystems that you brought up, and restart the system from its hard drive.
Whoa! Did you get all of that? I didn't think so. Here's the same sequence again, with a little more explanation:
- Boot up the system in a "rescue mode" from a "rescue disc or diskette"
If you see the "LILO:" prompt while you're booting up the system you can usually hit the [Caps Lock] or the [Scroll Lock] key or just start typing to force the boot loader to pause at this point.
From there you can tap the [Tab] key to see a list of boot image "labels" (usually one will be named "Linux" or "linux").
From this prompt you can type a command like:
linux init=/bin/sh rw
... to bring up the system in a "rescue mode."
This will bypass the whole normal startup sequence and prevent the system's normal initialization program (init) from spawning the 'getty' processes that take over the console and force you to login.
BTW: It's possible to set another password on your LILO boot loader (adding a line to your /etc/lilo.conf) that would prevent this trick from working. That password, if set, would not convey any other access to the system, it would only allow one at the console during the boot up cycle to select and over-ride the boot settings.
The "rw" at the end is a convenience to make sure that the main (root) filesystem is brought up (mounted) in a read/write mode. Normally a UNIX/Linux system comes up with the root filesystem mounted read-only so that it can be checked and repaired.
- ... or from a "rescue disc or diskette"
You might have been offered a chance to make a custom rescue diskette during your installation. If you were wise you did.
If your system can boot from a CD drive then your distribution's CD usually can act as a "rescue disc." So you act as though you're going to re-install, but you use the keys [Alt]+[F2] (hold down the [Alt] key and hit the [F2], second function, key).
If that doesn't work, boot the system up under some other operating system or use a different computer and look for a "rescue diskette" image. Hopefully the instructions for that will be listed somewhere in your manual or on the web site for your favorite distribution. (Of course Corel's site is basically impossible to navigate if you're looking for technical support information specifically about their product. It doesn't seem to have a search engine and I don't see a link to a simple "Corel Linux FAQ".)
Failing that look at Tom Oehser's site for his "Root/Boot" floppy (http://www.toms.net/rb) Unfortunately this is NOT a package for newbies.
- (maybe) mount the root filesystem,
If you booted from a rescue diskette you'd normally be running from a RAM disk. So you have to find your main (root) filesystem and mount it up. On a typical Linux system that would involve a command like:
mount /dev/hda1 /mnt
You need to know what type of hard drive you have (/dev/hd* for IDE, /dev/sd* for SCSI), which one it is (a for the first drive on the primary controller, and letters b, c, d, etc for others), and which partition it's on (1 through 4 for the primary partitions, and 5-12 or so for any logical drives in an extended partition).
Once you've done that you should change into that directory (/mnt in my example and in most cases) and make that the "virtual" root directory using the following commands:
cd /mnt
chroot . /bin/sh
- possibly mount a "/usr" filesystem on top of that
Even if you booted from the hard drive using the init=/bin/sh trick, you may have to bring up another filesystem. The 'passwd' command is usually in the /usr/bin directory, and the /usr directory is often separated onto its own filesystem. (It's traditional, and there are good reasons for it as well.)
Here's the command to do that:
mount /usr
- run the 'passwd' command,
Finally you should be able to run the 'passwd' command to set a new password for yourself.
If you get some sort of error about a "read-only" filesystem then you probably forgot the rw option at your LILO prompt. Use the following command:
mount -o remount,rw /
and try again.
- unmount the filesystems that you brought up,
If that was successful then you should be able to unmount any filesystem that you mounted:
umount /usr
... and if you were booted from a rescue diskette or CD:
exit; umount /mnt
... or if you were booted from the hard drive:
mount -o remount,ro /
This sets up all of the filesystems so that they are "clean" and can be used immediately after the next step without a time-consuming consistency check.
- restart the system from its hard drive.
Finally you should be able to reboot. This is actually a bit trickier than you'd think when you've booted into this "rescue mode." (If you booted from a diskette or CD, just pull that out and hit the reset switch).
If you've booted from your hard drive using the init=/bin/sh trick (what I call "rescue mode") then you should shut down and restart the system with the following command:
exec /sbin/init 6
... this is because the various sorts of 'shutdown' and 'reboot' commands usually are just sending a "signal" and performing some IPC (interprocess communications) with the 'init' program. In other words, normally only the init program does a reboot or a system halt (or changes "runlevels" --- operational modes). However, we bypassed the normal process and we're running a command shell instead of init. The shell isn't programmed to respond to signals by reading the /dev/initctl pipe (FIFO) for messages.
We can't just "run" init like a normal program. init detects what process ID it is running under and only assumes system control if it is process ID number 1 (PID ==1). If not then it acts as a messenger, trying to pass signals and commands to the "real" init process. However, our shell is running as PID 1 --- so we need to tell the shell to "chain over" or "replace its code with" that of init.
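The "chain over" behavior of exec is easy to see from any shell. This one-liner (just an illustration, not part of the recovery procedure) prints the shell's PID before and after an exec, and the two numbers come out identical:

```shell
# Print the shell's PID, then exec a new shell that prints its PID.
# Both numbers are the same: exec replaces the process image but the
# process itself (and its PID) survives.
sh -c 'echo $$; exec sh -c "echo \$\$"'
```

This is exactly why 'exec /sbin/init 6' works in our situation: the new init inherits PID 1 from the shell it replaces.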
I realize that all of that was pretty complicated. You don't have to understand the inner workings of init in order to run this last command or to follow most of this procedure.
It won't even be the end of the world if you just hit the red switch and reboot the system. However, I've tried to make this set of instructions simple enough and general enough that it will work on most Linux systems.
If you get too stuck, call tech support. I see that Corel offers a fee-based North American telephone technical support option at about $50 per incident (I guess that would be in U.S. dollars). Of course my employer Linuxcare (http://www.Linuxcare.com) also offers per incident fee-based support as well. You could call them at 1-888-LIN-GURU for details.
There are also many Linux consultants that might be able to help you, possibly in person. Look at the Linux Consultants HOWTO (http://www.linuxports.com/howto/consultants/Consultants-HOWTO.html)
From Anthony Kamau on Tue, 16 May 2000
I have Linux installed on the 1st hard drive and want to boot to windoze on the 2nd hard drive. I read somewhere that I could fool windoze into thinking that it is on the first hard drive by changing a few parameters in the "lilo.conf" file. Would you happen to know what I need to add to this file in order to have it dual boot?
Thanks, Anthony.
I don't know. But I don't recommend this way of doing things. MS Windows and other Microsoft products are somewhat brittle and it's a bad idea to try to fool them. Even if it works for some situations, for a while, it can break as the situation changes and whenever you upgrade any of their products.
So, I'd really suggest putting Linux on the second drive and letting MS Windows have the first drive. Linux is very flexible and is far less likely to break something like this from one upgrade to the next (and you'll always have the Linux sources to fix anything that we do break).
Remember, if you have any problems with LILO on a system where you are running any sort of MS-DOS derivative --- take the easy way out and run LOADLIN.EXE. It's often much easier than fussing with boot records. In the worst case, use a SYSLINUX or LILO floppy and boot Linux from that.
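For the record, a typical LOADLIN invocation from a DOS prompt looks something like this (the kernel image path and root partition below are placeholders; substitute your own):

```
LOADLIN.EXE C:\LINUX\VMLINUZ root=/dev/hda2 ro
```

The 'ro' brings the root filesystem up read-only so it can be checked, just as a normal LILO boot would.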
I wonder how many people know how to get the most from the power of X - it really sets Unix apart from simple windowing PCs. Here is a tip that I've been using for years - maybe it will be news to others, as it's not really documented anywhere for the average user; it's rather buried in the man pages.
To set the scene, poor old dad often has to stand aside to let the rest of the family read their email, do their homework etc. This is a bit of a fag on certain well known proprietary windowing systems as you would have to
save your work, exit all applications, log out, let them play through, log them out, log back in, and restore all your applications.
Rather than do all this, I simply create a new X session with the following command:
X :1 -query raita &
where 'raita' is the name of my computer. A new X server starts up and the visitor can log in and do their stuff. We can flip between their session and my own with Ctrl-Alt-F8 and -F7. When they are finished, they simply hit Ctrl-Alt-BackSpace or log out and I warp back to my own workspace with Ctrl-Alt-F7.
No loss of data, no messy logging in and out.
You need to be running an XDMCP session manager (e.g. xdm, gdm or kdm) for this to work. You are using XDMCP if you get a graphical logon at bootup. If you have a text-mode logon and run X with startx then you might need to modify this approach.
I also use this neat feature of X at work - we have many Unix systems that I need to log into from time to time - Linux, Solaris and UnixWare. I could use rlogin, rsh or xrsh but for some jobs nothing beats a full X session.
I can flip from one system to another by creating new X sessions on my Linux workstation. Normally at work I use a slightly modified command:
X :1 -indirect dun &
... where dun is running an XDMCP server (like xdm, gdm or kdm). It then gives me a chooser that I can use to pick which system to log into.
I often have many such sessions at once - just increment the display number for each and they map to different 'hotkeys':
X :1 -indirect dun .... Ctrl-Alt-F8
X :2 -indirect dun .... Ctrl-Alt-F9
X :3 -indirect dun .... Ctrl-Alt-F10
with Ctrl-Alt-F7 being the default X display :0
Another ploy is to use Xnest in a similar way. Instead of getting an extra X server, Xnest runs a new X session in a window. I use this:
Xnest :1 -indirect dun &
or, if I want to use a full-sized screen I use:
Xnest -geometry 1280x1024+0+0 :1 -indirect dun &
There are some minor issues with font sizes when using a smaller window, but generally it's not too bad.
If you get tired of typing "/etc/init.d/apache reload" every time you change your Apache configuration, or if you frequently start and stop squid (e.g., to free up memory for extensive image editing), use shell functions to take the tedium out of typing.
The following functions allow you to type "start daemon", "stop daemon", "restart daemon", and "reload daemon" to accomplish the same thing. They should work on Debian or a similar system which has a script for each daemon in /etc/init.d/, where each script accepts start, stop, restart and reload as a command-line argument.
I use zsh, so I put the following in my /root/.zshrc:
function start stop restart reload {
    /etc/init.d/$1 $0
}

This creates four functions, each with an identical body. $0 is the command name (e.g., "start"); $1 is the first argument (the name of the daemon).
The equivalent functions in bash look like this:
function start { /etc/init.d/$1 start; }
function stop { /etc/init.d/$1 stop; }
function restart { /etc/init.d/$1 restart; }
function reload { /etc/init.d/$1 reload; }
bash puts "-bash" into $0 instead of the command name. Perhaps there's another way to get at the command name, but I just chose to make four functions instead.
Debian actually puts the name of the package in /etc/init.d/; this may be different from the name of the daemon. For instance, the lpd daemon comes from a package called lprng. An enhancement to the functions would be to recognize lpd, lpr and lp as synonyms for the easily-forgotten lprng.
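A sketch of that enhancement might look like this (the alias list is the one suggested above; the echo stands in for actually running the init script, so the example can be tried safely):

```shell
# Dispatcher: map friendly aliases to the real Debian script name,
# then build the init-script invocation. Replace 'echo' with the
# real call (remove it) to make this live.
svc() {
    name=$2
    case $name in
        lpd|lpr|lp) name=lprng ;;   # synonyms for the lprng package
    esac
    echo "/etc/init.d/$name $1"
}
start()   { svc start   "$1"; }
stop()    { svc stop    "$1"; }
restart() { svc restart "$1"; }
reload()  { svc reload  "$1"; }

start lpd      # prints: /etc/init.d/lprng start
restart apache # prints: /etc/init.d/apache restart
```

Because all four wrappers share one dispatcher, new aliases only ever need to be added in one place.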
Shane Kennedy <skenn@indigo.ie> asked the Answer Guy:
How do I switch off the shell screensaver?
setterm -blank 0
It's a feature of the Linux console driver, not the shell.
Thu, 04 May 2000 08:34:09 -0500
From: Christopher Browne <cbbrowne@hex.net>
This can refer to two things:
a) The fact that Linux kernel releases are split into
"stable" and "experimental" releases.
Thus, versions numbered like 1.1.n, 1.3.n, 2.1.n, 2.3.n represent
"experimental" versions, where addition of new
functionality is solicited, whilst those numbered 1.0.n, 1.2.n,
2.0.n, 2.2.n, 2.4.n represent "stable" versions,
where changes are intended to only be made to fix problems.
Occasionally, "experimental" functionality gets
backported to the "stable" releases, but this is not
the norm.
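The even/odd rule is mechanical enough to check in a few lines of shell (this helper is my own illustration of the numbering scheme, not a standard tool):

```shell
# Classify a 1.x/2.x-era kernel version string: an odd minor number
# means a development ("experimental") series, an even one "stable".
kernel_branch() {
    minor=${1#*.}        # strip the major version ("2.3.51" -> "3.51")
    minor=${minor%%.*}   # keep only the minor version ("3.51" -> "3")
    if [ $((minor % 2)) -eq 1 ]; then
        echo experimental
    else
        echo stable
    fi
}

kernel_branch 2.3.51   # prints: experimental
kernel_branch 2.2.14   # prints: stable
```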
b) There is a theory that, at some point, development of Linux
could "split" to multiple independent groups.
For instance, there are some people working on functionality
intended to support big servers (e.g. - SMP, various filesystem
efforts). And there are others building functionality supportive
of tiny embedded systems (Lineo, Embeddix, ...)
The theory essentially goes that since their purposes are
different, there may be some point at which the needs may diverge
sufficiently that it will not make sense for there to be a single
point of contact (e.g. Linus Torvalds) to decide the direction of
development of _THE_ official
Linux kernel.
What might happen is that a group would take a particular version
of the Linux kernel source code, and start developing that quite
independently of another.
For instance, there might be a "split" where the
embedded developers start developing the kernel in a way attuned
to their needs.
This is _essentially_ what happened when OpenBSD
"split" off of the NetBSD project; the developers
concluded that they could not work together, and so a new BSD
variant came into being.
The use of the GNU General Public License on the Linux kernel
does mean that it would be legally permissible for a person or a
group to perform such a "split."
It would, however, be quite _costly_, in that it would mean that
the new group of developers would no longer have much benefit
from the efforts of people on the other side of the split. It is
a costly enterprise (whether assessed in terms of money, or,
better, time and effort) to keep independent sets of source code
"in sync" once they are purposefully taken out of sync.
Hope this helps provide some answers to the question...
Date: Sat, 13 May 2000 15:57:49 -0400
From: Tony Arnett <lkp@bluemarble.net>
A tip was given for Linux systems that do not recognize the total amount of available RAM.
The tip given was to insert the following param into
"lilo.conf"
append="ram=128M"
I had no such luck with this param. I think the proper param to
use is:
append="mem=128M"
This worked for me on my Gentus Linux 1.0 system.
Here is my entire lilo.conf
boot = /dev/hda
timeout = 50
prompt
default = linux
vga = normal
read-only
map=/boot/map
install=/boot/boot.b
image = /boot/vmlinuz-2.2.13-13abit
label = linux
initrd = /boot/initrd-2.2.13-13abit.img
root = /dev/hda5
append="hdc=ide-scsi hdd=ide-scsi mem=128M"
other = /dev/hda1
label = win
I hope this will help someone.
Lost Kingdom Productions
Tony Arnett
[It is definitely
append="mem=128M"
as you say. I use it myself. The only instance of "ram=" I could find was in http://www.linuxgazette.com/issue44/tag/46.html, and it is quoted in part of the question, not as the answer. If there are any other places where it says "ram=128M", please let me know where and I'll fix them immediately.
I looked in the BootPrompt-HOWTO http://www.ssc.com/mirrors/LDP/HOWTO/BootPrompt-HOWTO.html and did not see a "ram=" parameter. There are some "ramdisk_*=" parameters, but that's a different issue. -Ed.]
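As a quick check, before and after adding the parameter you can ask the kernel how much memory it actually detected (this is a generic Linux tip, not specific to any distribution; the figure shown is machine-dependent):

```shell
# Ask the kernel how much memory it sees (reported in kilobytes).
grep MemTotal /proc/meminfo
```

If the MemTotal figure is far below your installed RAM, the mem= parameter is the usual fix on these older kernels.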
Re: Command line editing
Wed, 17 May 2000 08:38:09 +0200
From: Sebastian Schleussner Sebastian.Schleussner@gmx.de
I have been trying to set command line editing (vi mode) as part of
my bash shell environment and have been unsuccessful so far. You might
think this is trivial - well so did I.
I am using Red Hat Linux 6.1 and wanted to use "set -o vi" in my
start up scripts. I have tried all possible combinations but it JUST DOES
NOT WORK. I inserted the line in /etc/profile , in my .bash_profile, in
my .bashrc etc but I cannot get it to work. How can I get this done? This
used to be a breeze in the korn shell. Where am I going wrong?
Hi!
I recently learned from the SuSE help that you have to put the
line
set keymap vi
into your /etc/inputrc or ~/.inputrc file, in addition to what
you did
('set -o vi' in ~/.bashrc or /etc/profile)!
I hope that will do the trick for you.
Cheers,
Sebastian Schleussner
The Open ISP Project
Preface
Free Internet access. It's a phrase we hear everywhere. With the proliferation of ISPs, Internet access is getting hot... I mean, cool. Whatever. Prices are going down every day. But there's a limit. We always have to pay the Phone Company for our "free" Internet time. In countries where there is a PSTN monopoly, the end user is usually abused by the almighty Phone Company. And in countries where local phone calls are free, users always have to pay the ISP. Even if you are OK with that, we must all acknowledge that as Linux users we get marginal support from our ISPs. Yes, there are a lot of Linux-friendly ISPs, but what about the power features, like encrypted PPP sessions or serial load balancing? There's even a new modality: advertisement-sponsored ISPs. Just by loading a "bar" which displays ads while you use the ISP, you get "free" Internet access (phone call charges vary depending on your zone coverage or country). Of course, there is no Linux version of such services, and even if there were, would you agree to eat their ads? In this article, I want to begin a discussion of why the Linux community needs a truly zero-cost and feature-rich ISP, and how such a project would benefit the entire Linux community, our own countries, and the IT world in general. To reach these goals, I believe the zero-cost ISP project should be Linux-only. Keep on reading, and I'll explain why.

There have been efforts all around the world to bring Internet access costs down, namely "plain rates"; some have partially succeeded, some others not. Why? Because THEY DON'T HAVE THE COMMUNITY SUPPORT/UNITY WE HAVE.
1. Why Linux needs a zero-cost ISP
The need is out there... here is what I believe a zero-cost ISP can do for Linux and for nations:

1.1. Nurture the new Linux minds: as an intelligent species, we nurture the youth to become the next generation of leaders and supporters of our society. If we provide the means for our kids and teenagers to learn and develop themselves, we will be a successful society in the long run. Professional soccer/baseball clubs have junior leagues where kids grow up enhancing their skills. Those who invest in the young ones are the ones who survive. I'm not worried about Linux's survival, but it's certain that for now we are still a minority. And in this new IT era, we need people to support all the infrastructure we are building today, ten years from now or even less. Who in the family has the lowest priority to use the Internet, the computer or the phone? The kids. As simple as that. If you pay for the phone by the minute, most parents wouldn't like their kids to spend hours online. And if the parents use the computer regularly, the kids must stay away from it. And most parents consider a computer too expensive a toy to buy a new one for the kids. If we, as a community, nourish our youth, our success is inevitable. Many people talk, these days, about winning the desktop war. Give kids Linux and we'll see five years from now.
1.2. Bring Enlightenment: how to expand our user base: people use what they are given to use. If you buy a new computer, which OS will you get by default? I know this story is ending, with recent support from hardware integrators, or, for instance, the deal between Corel and PC Chips. OK, from now on people will have a choice, although not so soon. We as a community must develop some strategy to attract people to our OS. How? Give them free, I mean _free_, Internet access. We have to give people a reason to use Linux. We have lots of ISPs around, each one trying to win new customers using different strategies and features. They distribute Windows-only setup CD-ROMs to ease the subscription process. And most of them claim "Free Internet Access". That is really a half-truth: you still have to pay for the phone call. There are some others that give you one "free month", and then charge you for a maximum amount of time online per month, with the phone call free. OK. But what they don't tell you is that you pay for the phone calls during this "free month". I just feel sick with all those half-truths, or should I call them "half-lies"? Isn't a half-lie a lie anyway? Now, imagine we provide truly free and unlimited (this point we have to discuss; remember, I'm just trying to build a discussion around this subject) Internet access to anyone who wants it, but only if they use Linux. I mean, like M$-Chap (fortunately pppd can deal with that), we could develop some Linux-Chap, but I don't think it's ethical; or is it? In case it's not, maybe we could accept only clients capable of PPP encryption or BSD/SLHC compression. We have to address all these technical details in a forum. But the main idea is: "Use Linux, and you'll have free unlimited Internet access, just by using it on your computer". We already have everything to fill the needs of end users: web browsers, office suites, drawing tools, etc., and more is coming.
1.3. More bugs hunted - more eyes on the source code: if we bring more people to Linux, we'll get more people interested in studying its internals, learning to program, and developing programs. Not everyone, I know, but if we get just one out of a thousand, and we get some more millions of new users, it looks pretty sexy, eh? And if we give them a way to download more source code or binaries per unit of time, in the long run we'll have more developers and/or bug reports. Just by reporting bugs, or what they dislike/need from our OS, evolution is going to accelerate. And remember, we won't have Linus or Alan or thousands of others forever (what a sad life without them). We need to plant the seed for the generations to come. By giving users a free high-quality OS and free Internet access, don't you think that someday they will want to give something back to the community? That's how Linux works: we are all trying to give something back to the community. Those of you reading these lines, aren't you trying the same every day? That's why we have copy parties, mailing lists, newsgroups, etc. We are a gift community and a bazaar community.
1.4. Provide our community with a unified local repository of software - faster downloads: in many countries there are no unified national backbones. Academic networks and commercial ones do not share a common backbone, or are in the process of building one. Around the world we have hundreds of mirrors of Linux repositories, but within a single country the user and the mirror may be in different networks, making downloads slow even though the mirror is in the same country. I don't intend to abolish existing mirrors, but to provide, through the zero-cost ISP project, a nationwide ISP with all the necessary Linux resources. People won't _have_ to use it; it's just a choice, and a fast choice. The 0800-LINUX ISP must be nationwide to achieve this goal. Besides, the PPP link can be established with extra compression (not just IP headers), giving phenomenal throughput. And let's add to this the chance to have two phone links using serial load balancing (an option in the kernel). Should this ISP include ISDN/xDSL service? In the beginning maybe not, due to increased costs, but it's just a matter of gauging the demand for it. It's another issue to discuss in this project. And last but not least: faster downloads mean efficiency - and thus economy - for the Open ISP's budget!
1.5. Give privacy to people: what is your ISP doing with your data? And your mail? Do you think your current ISP protects your privacy? I don't know for sure, but I don't think so. What about massive web tracking via ISPs, with technologies such as Predictive Networks? Have you ever heard of the Echelon project? "Big Brother is watching you", remember? The 0800-LINUX ISP project can help us reach a decent level of privacy. How? With encrypted PPP links, educating our users to use PGP/GPG, giving free web mail a la hushmail.com or through SSL. It's a very simple way to encourage users to use strong encryption. Which well-known free web mail service provides users with strong encryption? Remember what happened at Hotmail some time ago, when crackers published techniques/programs to read any account's mail? If we support strong encryption this can't happen again. I also think of, let's say, an encrypted /home filesystem. We can think endlessly of new applications. Well, let's not forget that all this is subject to government permission. There are proposed laws in the UK, for example, that would make it illegal to refuse to hand decryption keys over to the government.
1.6. Open new business opportunities: with a large user base it's impossible not to mention the new (now not so new) and huge market it will bring. Books, commercial software (while we don't have free replacements, we have to buy them; think about games), more distribution sales, support companies, huge demand for Linux-inside PCs, etc.; everything will grow, exponentially, along with even unthinkable new businesses. We are in a new e-conomic era, and Linux is one of the driving forces in it. Look at the success of the Linux IPOs. And we are a minority! We just have to pull the trigger. The results will overwhelm us.
1.7. Fill the demands of the IT world: lots of nations are now making plans to fill the huge demand for IT professionals. It's a problem for all developed and developing countries. The projected shortage of IT workers in the years to come is alarming. I think the Open ISP project can play a major role in reversing this process: it will bring the free software community spirit to thousands of new individuals, stimulating collaborative development and user-to-user support. The more people get access to computers and the Internet, the more skilled the population becomes.
1.8. Allow more nations to get involved in, and profit from, e-commerce: most European nations are worried about the lead the US has taken in e-commerce. And in the end, the final customer is the one who benefits from competition. But to fill the gap they need the human resources to build and support the infrastructure. The European Union has launched the "eEurope Initiative" to chart a course of action to acquire a competitive edge in e-commerce and new technologies.
2. Creating a zero-cost LINUX ISP
So if a zero-cost Linux ISP can benefit the Linux community, how can we raise the funds to achieve it?
2.1. Existing Linux/Open Source funds: the Open Source Equipment Exchange, the Linux Fund, or Open Source awards like the Beanie Awards by Andover.net or the EFF Pioneer Awards.
2.2. Linux distributors: if we get this project to work, it is certain that the companies behind the Linux distributions are going to benefit. Nowadays you can see boxed Linux distributions in well-known stores around Europe and South America, whereas just a year and a half ago you couldn't. Now it's easy to find bright and shiny boxes of SuSE, Red Hat, Corel and Mandrake, to name a few. The main Linux distributions have shown over the years a firm and sincere support for a vast range of projects. And they know their success depends on the user base. We just have to develop a strong project and they surely are going to help. If this project comes to life, Linux distributors could advertise "Free Internet" bundled with the product. You just install Linux, and you have free Net access.
2.3. Linux publishers: lots of publishing houses are building businesses around Linux these days. It's more and more common to see new Linux books on the shelves at major bookstores. If they donate a small fraction of the sale of each book to the project, then we have more funds. We just have to get more people into the community, and books are going to start flying off the shelves. It's inevitable. Houses like O'Reilly are well known for their support and sponsorship of projects.
2.4. Other UN*X companies: why did SUN give StarOffice away for free? If the Linux community succeeds, Un*x will get exposed to the general public and corporations. It will strengthen Un*x acceptance. Un*x vendors will stay alive in the game. Even SGI, which is now embracing Linux instead of IRIX, will win because hardware sales make more sense to them. If Linux in general has support from these companies, the 0800-LINUX project benefits indirectly from that support. Now we have a high-quality office suite, free to offer to the public, thanks to SUN. Maybe we can become a Sunsite partner, thus receiving hardware from SUN itself.
2.5. Hardware integrators: if you can sell a computer that comes with free Internet access, with no more headaches for the end user to set it up, just dialing 0800-LINUX, hey, it's a hell of a good strategy. And if you save users the cost of the OS, prices are going to be even more attractive. Hardware integrators can supply a machine with a free OS, free applications, a free office suite, and FREE Internet access... Again, the more users we attract, the more hardware gets sold. VA Linux Systems, Penguin Computing, Compaq, Dell, to name just a few: all of them are in the game. They are just waiting, _waiting_, for demand to supply Linux already installed. They are tired of paying the M$ tax. They can instead save that money and support this project with just a fraction of it. Whether it's hardware or money, we'll benefit.
2.6. The government: in highly developed countries kids have computers at school. They develop their understanding of, and attraction to, computers from an early age. Until now, the beginning of the 21st century, all countries had access to the same kind of technology and education. Technology was easy and cheap to replicate in every country, even the poorest. And education was more or less the same everywhere, with no specialization, or low-tech specialization at most. Every country has had, more or less, the same opportunities to develop. Now we enter a new era. The gap between developed nations and developing ones grows larger every day. Technology, services, specialization, high-tech industries, education and the Internet are the turning points in this new era. And I'm not saying anything new. The more people with access to technology, information, services, and communications, the wealthier the country becomes. And the more developed, in general terms. Where do you think Linus is from? Finland. Cox? The UK. Stallman? The US. I know you see the path. Since the zero-cost Linux ISP is a non-profit project, the government can grant tax deductions on the funds private companies and/or individuals give to the project. The government itself could even help fund the project, given the importance of the results. It's not just Linux; it's the enlightening of the population by means of Linux, and the long-run results it is going to bring.
2.7. United Nations: (please help me on this)
2.8. End user donations: we can't require our users to pay a fee for Internet access; if we do, we'll just become YAISP (Yet Another ISP), and will add another level of complexity to the project (managing subscriptions, payments, etc.). Besides, the goal of our project is to provide an easy way for users to set up their Internet access: they just dial 0800-LINUX after installing Linux or buying a brand new computer. The distributions can even come pre-set-up out of the box with a list of countries where the 0800-LINUX project is working. So users will be just one click away from the 'Net. In this project we have to develop all the policies and framework of the ISP, so it will be the same all around the world. Distributions can ship already set up.
Therefore, when users want to give back to the community, they can simply donate hardware and/or funds to the project. Just a tiny fraction of what they pay annually to their respective ISPs and/or Phone MonopoLIES will be enough.
If you agree that a zero-cost Linux-only ISP can be beneficial for the growth of Linux, how do we as a community address the points I made above about creating such a project? I think that as a first step we should create a mailing list and run a poll to find out what percentage of the Linux users in our country use dial-up Internet access.
Is a Linux-only free ISP project even possible? The first thing one might think when reading about this project is that it is going to cost too much money. OK. You have a point. But think of it this way: if we raise the necessary money to have a 0800-LINUX ISP in our country, do you think it is worth it? We have plenty of choices, and reasons, to find funds.
We have to find all these answers together. This is a project that must be born inside the community, not imposed from the outside. After we find consensus, we must prepare a complete proposal to all the Linux related companies, to know how much funding we can get.
And for the technical details of the ISP we could create an "Engineering Task Force". Please, email me at carlos.betancourt@chello.be if you believe in the plausibility of this project and would like to participate.
LINKS
Hotmail Cracked Badly
UK Decryption Law Pushed Through
'Echelon Study' Released by European Parliament
Hotmail Hole Exposes Users to Security Exploits
March 24th: