Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Michael "Alex" Williams, Don Marti, Ben Okopnik
TWDT 1 (gzipped text file) and TWDT 2 (HTML file) contain the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
This page maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright © 1996-2000 Specialized Systems Consultants, Inc.
The Mailbag!
Write the Gazette at gazette@ssc.com
Contents:
Send tech-support questions, answers and article ideas to The Answer Gang <linux-questions-only@ssc.com>. Other mail (including questions or comments about the Gazette itself) should go to <gazette@ssc.com>. All material sent to either of these addresses will be considered for publication in the next issue. Please send answers to the original querent too, so that s/he can get the answer without waiting for the next issue.
Unanswered questions appear here. Questions with answers--or answers only--appear in The Answer Gang, 2-Cent Tips, or here, depending on their content.
Before asking a question, please check the Linux Gazette FAQ to see if it has been answered there.
No unanswered 'help wanted' letters this month.
Fri, 3 Nov 2000 09:39:41 -0000
From: Arthur G S Wilkinson <agswilk@servalan.org>
Subject: LG FTP listings have bogus "@" signs in them
I have noticed that the Linux Guides FTP site at ftp://ftp.ssc.com returns the directory listing in a format which appears garbled in some versions of Microsoft Internet Explorer.
Using the Windows command-line FTP program, the Unix user and group IDs appear with @'s in them; this appears to confuse IE.
Can anything be done about this?
[This was an artifact of our upgrade of wu-ftpd from 2.6.0 to 2.6.1 following advisories against version 2.6.0. As near as I can tell, "@" in the directory listings resulted from a defect in this new version of wu-ftpd. For this reason and because wu-ftpd was now experiencing segvs, indicating possible buffer overflows or memory allocation problems, we've retired it in favor of a relative newcomer, muddleftpd: http://www.arach.net.au/~wildfire/muddleftpd/mailing.html From the netnoise I've found so far, this daemon is well-recommended. The configuration is very simple and covers our needs nicely. Take a look and give us some feedback if you like. -Dan.]
Sun, 5 Nov 2000 11:19:15 EST
From: <Yestonight@cs.com>
Subject: suggestion
It would help if the contents had a concise statement of what each article holds. The sentence could appear only when the cursor passes over.
[Regarding the first part (a concise abstract for each article), we'll consider that the next time we revise the Gazette's layout. The current Table of Contents doesn't have room for it, and we really want all the article links visible with as little scrolling as possible. What should the concise statement contain that isn't already in the title? I try to make the title as descriptive as possible, so that readers will not miss an article about something they're concerned about simply because they didn't realize the article would be about that.
Regarding the second part (making the sentence appear only when the cursor passes over it): that would require JavaScript, and we have preferred to keep the site free of JavaScript, style sheets, etc. - anything which might cause problems for some browsers. Perhaps in the future we'll revisit the question of JavaScript now that it has a browser-neutral standard (ECMAScript).]
Thu, 9 Nov 2000 16:36:35 -0600
From: THE MAGE <mage1@hehe.com>
Subject: Getting all the FTP files in one file
Dear editor, I would like to know if there is any way I could download all the issues in HTML format within a tar.gz or .zip file. I know that I could download each issue alone, but it would be very helpful if you could tell me a way to download all the magazine's issues together.
[There is no single file that contains all the issues. However, you can have a program download all the files at once without human intervention.
- ftp: binary, prompt, mget *
  (Do the prompt command once or twice until it says "Interactive mode off". This prevents it from asking whether to download each file.)
- ncftp: get *
- rsync: see http://www.linuxgazette.com/faq/index.html#rsync
- mirror: I don't know the options...
I personally would use ncftp for a one-time download, or rsync to set up something which would run regularly via cron, or rsync on demand via a simple shell script. The beauty of rsync is that it downloads only the portions of files that have changed, saving time and bandwidth, especially if your Internet access is expensive. -Mike.]
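As a rough sketch of both approaches (the remote paths and the rsync module name here are placeholders - check the FAQ link above for the real server details):

    # One-time recursive grab with ncftp's batch tool
    ncftpget -R ftp.ssc.com . /pub/lg

    # Repeatable mirror with rsync; only changed portions are transferred,
    # so this is the line to drop into a crontab
    rsync -av rsync://SERVER/MODULE/lg/ ~/lg-mirror/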
Fri, 10 Nov 2000 19:29:16 -0500
From: Andy Kinsey <ak47@pioneeris.net>
Subject: Kudos
Just a note regarding one of your 2-cent tip submissions:
I attempted to follow the 2-cent tip from the March 2000 Linux Gazette that places a weather screen on the desktop. I was having difficulty, so I e-mailed the author, Matthew Willis. Matt not only replied quickly to my question, but suggested a way to fix the problem, which worked. Thanks to Matt's assistance (which he did not have to give), I discovered the problem and learned something new in the process. Matt is a credit to Linux Gazette, and I'll be looking forward to many more tips from him and others like him.
Sun, 12 Nov 2000 01:16:01 EST
From: Mike Cathcart <mike_cathcart@hotmail.com>
Subject: dmesg explained
I just finished reading the article 'dmesg explained'. Good article, although I thought you might like to know that some of the excerpts from dmesg that are shown are not visible in Konqueror. Basically, any excerpt that did not include a <BR> tag is not rendered. This can be fixed by adding a <BR> to the end of those excerpts, which will not change the appearance in other browsers. I'll be filing a bug report with kde.org, but I thought you might want to 'fix' the page in the meantime.
You mean all the <PRE> blocks need a <BR> just before the </PRE>? Or do they need it on every line?
Actually, they just need a <BR> anywhere inside the <PRE>...</PRE>; it doesn't really matter where or how many. Kinda weird, but that seems to do it.
[I added a <BR> tag inside the manual page excerpt. Does it look all right in Konqueror? I'm not interested in putting <BR> tags in other articles just for this browser bug. I suppose if it were Netscape or IE, I'd have to. -Mike.]
Sun, 12 Nov 2000 01:16:01 EST
From: BanDiDo <bandido@drinkordie.com>
Subject: Kudos for LG
LG is awesome; if you charged for it I would subscribe. When I get some free time one of these days I hope to pen a few articles and such.
With appreciation for a fine publication
BanDiDo
Thanks. Linux Gazette was established as a free zine and we firmly intend to keep it that way. There are already paid magazines out there (we publish one of them :), but LG fills a unique niche. No other e-zine I know of (Linux or otherwise) is read not just through a single point of access, but in large part via mirrors or off-line (via FTP files, CD-ROMs, etc.).
Also, because LG's articles are written by our readers, you (readers) are truly writing your own magazine. I only put things together and insert a few comments here and there, and occasionally write an article. If it weren't for our volunteer authors, there would be no Linux Gazette. When I first took over editing in June 1999, I used to wonder every month whether there would be enough articles. But every month my mailbox magically fills with enough articles not just for a minimal zine (5-10 technical articles), but for a robust zine with 15+ articles covering a variety of content (for newbies and oldbies, technical articles and cartoons). A year ago, we never predicted there would be cartoons in the Gazette, but the authors just wrote in and offered them, and it's been a great addition. It is truly a privilege to work with such a responsive group of readers, and years from now when I'm retired (hi, Margie!), I'm sure I will remember fondly what an opportunity it was.
Our biggest thanks go to The Answer Gang, especially Heather and Jim, who each spend 20+ hours a month _unpaid_ compiling The Answer Gang, 2-Cent Tips and The Mailbag. This has really made things a lot easier for me.
We look forward to printing some articles with your name on them. See the Author Info section at http://www.linuxgazette.com/faq/index.html#author
And you other readers who haven't contributed anything yet, get off your asses and send something in! Write a letter for the Mailbag, answer a tech-support question, join The Answer Gang, do a translation for our foreign-language sites, or write an article. What do *you* wish the Gazette had more of? *That's* what it needs from you.
Would be lovely if you guys established an EFNET irc channel :)
Contents:
Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! A one- or two-paragraph summary plus URL gets you a better announcement than an entire press release.
The December issue of Linux Journal is on newsstands now. This issue focuses on System Administration. Click here to view the table of contents, or here to subscribe. All articles through December 1999 are available for public reading at http://www.linuxjournal.com/lj-issues/mags.html. Recent articles are available on-line for subscribers only at http://interactive.linuxjournal.com/.
OREM, UT-November 1, 2000: In a follow-up to last month's item, Caldera Systems announced its Linux management solution (formerly code-named "Cosmos") has been named Caldera Volution. The product, currently in open beta, is still available for download from Caldera's Web site at www.calderasystems.com/beta. Volution is a browser- and directory-based management product for Linux systems that utilizes the strengths of LDAP directories. Using Volution, network administrators can create policies and profiles to manage anywhere from a half dozen to thousands of Linux systems, without having to individually manage or touch each one.
OREM, UT-November 6, 2000: Caldera Systems, Inc. announces its upcoming Linux/Unix Power Solutions Tour 2000, which runs from November 14th through December 12th. The 12-city tour targets those who develop and deploy on Linux and Unix - including VARs, ASPs, ISVs, developers, resellers, consultants and corporate IT professionals. The tour presents Caldera's vision of the future for Linux and UNIX, along with Linux training. Each presentation on the tour includes two sessions: a morning business briefing and an afternoon Linux Essentials course with hands-on training, including for-sale software and solutions guides. You can get more details from www.calderasystems.com/partners/tour, or call toll-free on 1-866-890-8388.
Storm sent us some links which may be of interest to those wanting to find out about this distribution...
Courtesy Linux Journal. For the latest updates, see LJ's Industry Events page.
USENIX Winter - LISA 2000 | December 3-8, 2000 | New Orleans, LA | www.usenix.org
Pluto Meeting 2000 | December 9-11, 2000 | Terni, Italy | meeting.pluto.linux.it
LinuxWorld Conference & Expo | January 30 - February 2, 2001 | New York, NY | www.linuxworldexpo.com
ISPCON | February 5-8, 2001 | Toronto, Canada | events.internet.com
Internet World Spring | March 12-16, 2001 | Los Angeles, CA | events.internet.com
Game Developers Conference | March 20-24, 2001 | San Jose, CA | www.cgdc.com
CeBIT | March 22-28, 2001 | Hannover, Germany | www.cebit.de
Linux Business Expo | April 2-5, 2001 | Chicago, IL | www.linuxbusinessexpo.com
Strictly e-Business Solutions Expo | May 23-24, 2001 | Location unknown at present | www.strictlyebusinessexpo.com
USENIX Annual Technical Conference | June 25-30, 2001 | Boston, MA | www.usenix.org
PC Expo | June 26-29, 2001 | New York, NY | www.pcexpo.com
Internet World | July 10-12, 2001 | Chicago, IL | events.internet.com
O'Reilly Open Source Convention | July 23-26, 2001 | San Diego, CA | conferences.oreilly.com
LinuxWorld Conference & Expo | Conference: August 27-30; Exposition: August 28-30, 2001 | San Francisco, CA | www.linuxworldexpo.com
Linux Lunacy (Co-Produced by Linux Journal and Geek Cruises) | October 21-28, 2001 | Eastern Caribbean | www.geekcruises.com
Toronto, ON - October 31, 2000: A joint agreement has been announced between Ottawa-based OEone and Tatung Co. of Canada. The two companies will be working together to bring fully-integrated, Linux-based Internet Computer solutions to leading OEM customers. The core of this deal is an exclusive arrangement between the two parties to fully integrate OEone's Linux-based Operating Environment and web applications with Tatung's All-In-One plus additional custom computer designs.
The agreement also allows the more than 200 SGI open source operating system support technicians to join the ePeople marketplace to provide fee-based Linux support to anyone who needs it. SGI also offers Web-based service incident packs, called WebPacks, from its Online Helpdesk. WebPacks are prepaid service agreements available in quantities of 5, 10 or 20 incidents; for example, a 5-incident WebPack costs $449 (U.S. list).
Our Editor, Mike, recommends looking up GLUE (Groups of Linux Users Everywhere) for anyone interested in finding like-minded individuals in their area, or in publicising new groups.
November 15, 2000 (Las Vegas, Nevada): Further to its existing support for Linux, IBM is now joining the KDE League and integrating its ViaVoice technology into KDE. IBM's ViaVoice is currently the only voice recognition software commercially available for the Linux operating environment.
The KDE League is a group of industry leaders and KDE developers formed to focus on facilitating the promotion, distribution and development of KDE. The League will not be directly involved in developing the core KDE libraries and applications, but rather will focus on promoting the use of KDE and development of KDE software by third party developers.
SAN FRANCISCO Nov. 7, 2000: Linuxcare and Eazel announced a partnership geared toward speeding Linux development.
Under the agreement, Linuxcare will provide email support services to customers of Eazel's Network User Environment, which includes Eazel Services and the Nautilus client for Linux systems and can be downloaded at www.eazel.com. Linuxcare will also maintain a Linux knowledgebase support site at Eazel.com by capturing documentation and software updates, as well as managing and updating support FAQs. Linuxcare's services will support the preview of Eazel's Internet services and Nautilus client that is being integrated with the GNOME 1.4 desktop environment.
Training Pages runs entirely on open source software, including the Linux operating system, the Apache web server, the MySQL database and the PHP scripting language.
Version 0.9.2 of the Computer History Graphing Project has now been released. The project aims to graph all of computerdom in one large family tree. This version contains an updated version of the unified parser program, parsech. It can now optionally output a DBM hash containing the parsed data. The documentation has also been updated, in addition to the data trees. More specifically, the NeXT, Palm, Windows, and Apple Darwin trees have all been updated. The project's web site is located at comp-hist.sourceforge.net.
Adobe beta tested a Linux version of FrameMaker, then decided not to release a product. Linux Weekly News speculates why.
Is the Internet in China, rather than heralding an age of open communication, actually solidifying Big Brother's control? Linux Journal author Bryan Pfaffenberger argues so in his web article The Internet in China.
Tips on getting that darned mouse wheel to scroll under X.
Links from The Duke of URL:
Open source software is NOT free! Rick Lehrbaum examines the hours and sweat invested by individuals and companies in creating, testing, and maintaining it.
The open-source life: The author of an important development tool for Linux programmers writes about his project (the LTT) and complex GPL copyright issues.
The coders' collective: FreeDevelopers.net is seeking to reinvent the way companies produce software. Is the world ready for a democratically-elected software company?
Open-source rumble? A look at the rivalries in the open source community, and the effect on software development.
MSNBC have a good report from COMDEX 2000, focusing on the rise of embedded Linux systems.
A look at Gnutella, and possible legal implications.
Open-source developer's agreement (clauses for the contract between developers and their employers)
Slashdot review of a book explaining the Open Source revolution to non-tekkies.
From Linuxworld, an article alleging that MS is using Linux code in the latest Windows versions to make their product more stable.
Traceroute Java Servlet sources (under Linux) are now available for free download from http://cities.lk.net/trdownload.html
Somers, NY, November 6, 2000 . . . IBM today announced the industry's first Linux-based integrated software solution for small businesses. It delivers the tools necessary to help customers with messaging and collaboration, productivity, Web site creation and design, and data management. IBM also includes a fully integrated install program.
"This offering provides small businesses, and the Value-Added Resellers (VARs) and Independent Software Vendors (ISVs) that serve them, everything they need to do serious e-business on Linux.," said Scott Handy, director, Linux solutions marketing, IBM Software. "The IBM Small Business Suite is first-of-a-kind for Linux and delivers the three most requested servers: database, e-mail and Web application server software, delivering a great solution at a great price."
The suite is available for US$499 at www.ibm.com/shopibm. Site licenses are also available. Supported distributions include Caldera, Red Hat, SuSE and TurboLinux. The installer program and desktop software are available in ten European and Asian languages.
The Small Business Suite for Linux includes the following software:
Product details are available at www.wolfram.com/products/mathematica/newin41.
Omnis Software has announced the release of Omnis Studio 3.0, the latest version of their 4GL rapid application development (RAD) program. The new release incorporates extensive changes to their web server and Web Client(tm) technologies, significantly speeding up web-based business applications. It also includes a range of other enhancements to make the development experience more intuitive, easier to use, and more powerful.
Omnis Studio is a high-performance visual RAD tool that provides a component-based environment for building GUI interfaces within e-commerce, database and client/server applications. Development and deployment of Omnis Studio applications can occur simultaneously in Linux, Windows, and Mac OS environments without changing the application code.
A demonstration copy of Omnis Studio 3.0 can be downloaded from the web site: www.omnis.net and more details of the new version are available at: www.omnis.net/v3.
The reliability of tape device technology today is extremely high, but errors on the tapes can still occur in some cases after the archive has been written. BRU can effectively detect and recover from errors when reading a tape, allowing a restore to complete successfully.
Evolocity cluster systems include computational hardware, ClusterWorX(TM) management software, RapidFlow(TM) 10/100 and Gigabit Ethernet Switch, applications, and storage, including the BRU backup utility.
Tustin, California - November 9, 2000: Loki Software, Inc., publisher of commercial games for the Linux operating system, announces an agreement with Sierra Studios(tm) to bring the highly-anticipated Tribes(tm) 2 to Linux.
Loki is porting this first-person action game alongside the Windows development, and is now accepting beta tester applications for the Linux version. Interested participants should visit www.lokigames.com and complete an online registration form.
A new release of the `Mahogany' email and news client has been made. Mahogany is an OpenSource cross-platform mail and news client. It is available for X11/Unix and MS Windows platforms, supporting a wide range of protocols and standards, including POP3, IMAP, full MIME, and secure communications via SSL. Thanks to its built-in Python interpreter it can be extended far beyond its original functionality.
Source and binaries for a variety of Linux and Unix systems are available at http://mahogany.sourceforge.net/ and http://sourceforge.net/projects/mahogany/
Binaries for Win32 systems and Debian packages will also be made available shortly.
The latest beta of Opera for Linux is available at Opera.com.
Loki Software, Inc. and QERadiant.com are pleased to release GtkRadiant 1.1 beta for Linux and Win32. GtkRadiant is a cross-platform version of the Quake III Arena level editor Q3Radiant. GtkRadiant offers several improvements over Q3Radiant and many new features.
For more information, please visit http://www.qeradiant.com/gtkradiant.shtml.
LAS VEGAS - November 15, 2000: AbsoluteX, a new Open Source development toolkit, was officially launched at COMDEX 2000 in Las Vegas, Nevada, USA. AbsoluteX is an X-Window developer toolkit created by Epitera ( http://www.epitera.com ) to streamline and facilitate the process of developing customized GUIs (graphical user interfaces) for Linux. It is available for free download at ( http://www.absolutex.org ). Epitera believes AbsoluteX will help get Linux out of the exclusive IT world and into the mainstream desktop world of home, work and novice users.
Integrated Computer Solutions, Inc. have announced the first port of Motif to the upcoming IA64 platform from Intel. ICS says that this is important for the Linux community, because most of the existing Enterprise applications written for UNIX platforms (e.g., Suns, HP, SGI, etc.) use Motif as a GUI toolkit. Without the port of Motif to the IA64, it will be difficult and expensive for Enterprises to migrate to Linux.
A full press release is available. The software is also available for download from: http://www.motifzone.net/download/dldform.php
Hello, everyone, and welcome to issue 60... that means the whole 'zine has been here for 5 years now? That's just amazing to me. In fact, I'm coming up on 3 years as the HTML wizardess for TAG in only a few months. Y2K is almost over and all the usual questions are still here. The only thing different is that politics grow more boneheaded every year, but I don't care. I have my own plans for the season - what a fun Xmas this is going to be!
This seems to be the season that I get to help my friends who are only now getting into Linux (computing at all, in one case) get themselves all tucked in and snug in their distros. With any luck enough of you out there are doing the same, and we'll see a new blush on some HOWTOs in the LDP project which have gotten a bit dusty. (If you might want to work on that, see the thread "newbie installation question" below) For one of these pals, I'm not even sure which distro we are going to end up using... only that she can't bear to see a poor old 486 trapped under the yoke of Redmond any longer... (For a dissertation on selecting distros, see the thread "Best Linux Distro for a Newbie" where I recycled more than my fair share of electrons babbling about it.)
We just got a sweet little toy for ourselves here in the Starshine network, specifically, an NIC (New Internet Computer) from ThinkNIC.com. It comes with a CD, Linux based, and you just plug it in (power, modem or ether, it comes with speakers and keyboard), add a monitor and off you go. Errr, it didn't like our really old VGA monitor. I wonder just how long it's been since any of our machines have used that monitor for graphics at all... um, where was I? Oh yeah. It took a little while to find ssh and VNC in there, but it's a pretty useful setup. Nonetheless, we're going to see if we can run any other CD based distros on it too. This will make for hours of fun.
Now I suppose it's possible that you would be thinking of candied yams and duck dinners and the large fellow with the sack of toys about now. In our household it's more likely to be Bootable Business Card stocking stuffers (er, after we shave the contents down a bit - the RW business cards Jim got me are a bit small - but I'm sure you can find a dozen places selling 'em if you go to Google with the search keys "business card" and "CDRW"). Depending on the nerdiness factor in your household, the CD-RWs might make a great stuffer even if left blank.
As for the meal of the season, since Jim and I are heading out to LISA 2000 in New Orleans, the annual sysadmin's conference, we are going to enjoy some jazz and jambalaya. We'll also have a chance to hear Illiad as a keynote speaker. BOFH meets Dust Puppy? Oh my. This is gonna be fun...
Wherever the season takes you, and whatever it happens to bring, remember
we're all here to make Linux a little more fun!
From Caldera
As a followup to the LDAP discussions that have been answered here:
Caldera Systems' Linux management solution (formerly code-named "Cosmos") has been named Caldera Volution. The product, currently in open beta, is available for download from Caldera's Web site at
http://www.calderasystems.com/beta
More details can be found in our News Bytes (Distribution section).
Answers by: Dmitriy M. Labutin, César A. K. Grossmann, Niek Rijnbout
Hi,
You can dump the NT event log into a flat file with the dumpel utility (it comes with the Windows NT Resource Kit).
Cheers
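A typical dumpel invocation, going from memory of the Resource Kit documentation (run dumpel /? to check the exact flags on your version; the server and file names here are just examples):

    REM Dump the System event log of \\NTSERVER to a flat text file
    dumpel -f ntserver-system.log -s \\NTSERVER -l system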
[Cesar] To do this I must "be" on the NT computer. It's not something I can schedule from a crontab on the Linux box. I was thinking of some utility I could use to dump the log remotely, from the Linux box, where I have some freedom and tools to do nasty things such as reporting unusual activity by the users...
[Niek] See http://www.eventreporter.com/en for a $25 application to send the NT log to a syslog host.
Regards
The app Niek mentions also appears to deal well with Win2000, and offers email as well as syslog transfer of the events. -- Heather
From Juan Pryor on Tue, 7 Nov 2000
Answered by: Heather Stern
I'm pretty new to Linux and I was wondering if there is a way in which I can have two OSes working at the same time. I mean, I've had some trouble with the people at my house since they want to go back to Win98 and I only have one PC. Is there any Win98 program that reboots and starts in Linux and then when the computer reboots it starts in Win98 again? Any help will do.
Juan,
It's very common for Linux users to have their systems set up as dual-boot, sometimes up in MSwin, sometimes running Linux. Some distributions even try to make it easy to turn a box which is completely Windows into a half and half setup (or other divisions as you like).
There is a DOS program named LOADLIN.EXE which can easily load up a Linux kernel kept as a file in the MSwin filesystem somewhere - my friends that do this like to keep their Linux parts under c:\linux so they can find them easily. Loadlin is commonly found in a tools directory on major distro CDs. Of course, you do have to let Windows know that Loadlin needs full CPU control. In that sense, it's no different than setting up a PIF for some really cool DOS game that takes over the box, screen and all. Anyways, there's even a nice GUI available to help you configure it, called Winux, which you can get at http://www.linux-france.org/prj/winux/English ... which, I'm pleased to add, comes in several languages.
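To give the flavor of it, a loadlin boot line usually looks something like this (the kernel path and root partition are examples only - point them at wherever your kernel and your Linux root actually live):

    REM Boot the Linux kernel kept under C:\LINUX, mounting /dev/hda2 read-only as root
    C:\LINUX\LOADLIN.EXE C:\LINUX\VMLINUZ root=/dev/hda2 ro

Put that in a batch file, and the family can get to Linux with one command from plain DOS mode.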
It's also possible to set up LILO so that it always prefers to boot MSwin (the option is often called 'dos') instead of Linux... in fact, I recommend this too - otherwise, if MSwin should happen to mangle its drive space too far, you won't be able to boot Linux from anything but a floppy.
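In lilo.conf terms that's just the "default" line; a minimal sketch (device names are examples only):

    # /etc/lilo.conf fragment - boot the 'dos' entry unless told otherwise
    default=dos

    image=/boot/vmlinuz
        label=linux
        root=/dev/hda2
        read-only

    other=/dev/hda1
        label=dos

Remember to rerun /sbin/lilo after any change, or the new map won't take effect.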
Now this is kind of different from "two OSes working at the same time"... It is possible to run VMware, and have a couple of different setups running together, but doing this might be rather confusing to family who are not used to anything but Windows. They might accidentally hit some key combination that switches to the other environment that's running, and think they broke something even if it's all running perfectly.
To finish off - it's also possible to find really friendly boot managers; I've been looking over one named GAG (don't laugh, it's just initials for Spanish words meaning "Graphical Boot Manager") that looks like it might be fun, at http://www.rastersoft.com/gageng.htm. It was just updated, too. Anyways, it can boot up to 9 different choices and has nice icons to use for a lot of different OSs you may have on a system. Unlike LILO and some other boot managers that only replace the DOS "master boot record", though, it takes over a fair chunk of track 0.
From Michael Lauzon to tag on Tue, 14 Nov 2000
Answers by: Dan Wilder, Ben Okopnik, Heather Stern
I am wondering, what is the best Linux distro for a newbie to learn on (I have been told never to ask this question or it would start a flame war; I of course don't care)...so in your opinion: what is the best Linux distro for a newbie?
--- Michael Lauzon
[Dan] <troll>
Slackware. Because by the time you really get it installed and running, you know a lot more about what's under Linux's hood than with any other common distribution!
</troll>
--
Dan Wilder
Darn those trolls anyway. They're eating the dahlias now!
[Ben] <Grumble> Sure, you don't care; we're the ones that need the asbestos raincoats!
[Heather] Well yeah, but I usually put out the flame with a Halon canister labelled "waaay too much information." It does make me popular in the mailing lists, though.
[Ben] Spoilsport.
[Ben] To follow on in the spirit of Dan's contribution:
<Great Big Troll With Heavy Steel-Toed Boots>
Debian, of course. Not only do you get to learn all the Deep Wizardry, you get all the power tools and a super-easy package installer - just tell it which archive server you want to use, and it installs everything you want!
</GBT>
(The Linux Gazette - your best resource for Linux fun, info, and polite flame wars...)
[Heather] Of course it helps if you know which archive server you want to use, or that the way to tell it so is to add lines to /etc/apt/sources.list ...
[Ben] Oooh, are you in for a pleasant surprise! (I was...) These days, "apt" (via dselect) asks you very politely which server you want to use, and handles the "sources.list" on its own. I still wish they'd let you append sources rather than having to rewrite the entire list (that's where knowing about "/etc/apt" comes in handy), but the whole "dselect" interface is pretty slick nowadays. It even allows you to specify CD-based (i.e., split) sources; I'm actually in the process of setting up Debian 2.2 right now, and my sources are a CD-ROM and DVD drive - on another one of my machines - and an FTP server for the "non-free" stuff. Being the type of guy who likes to read all the docs and play with the new toys, I used "tasksel" for the original selection, "dselect" for the gross uninstallation of all the extraneous stuff, and "apt-get" for all subsequent install stuff. It's worked flawlessly.
[Heather] I did write a big note on debian-laptops a while back about installing Debian by skipping the installer, but I think I'll let my notes about the handful of Debian-based distros stand.
[Ben] I agree with your evaluation. It's one of the things I really like about Debian; I was able to throw an install onto a 40MB (!) HD on a junk machine which I then set up as a PostScript "server", thus saving the company untold $$$s in new PS-capable printers.
[Heather] There is rpmfind to attempt to make rpm stuff more fun to install, but it's still a young package. I think the K guys have the right idea, writing a front end that deals with more than one package type.
[Ben] Yep; "alien" in Debian works well, but I remember it being a "Catch-22" nightmare to get it going in RedHat. I've got package installation (whatever flavor) down to a science at this point, but it could be made easier.
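For readers who haven't met it yet, /etc/apt/sources.list is just a list of places apt may fetch packages from. A minimal sketch for Debian 2.2 (the mirror host is an example - pick one near you):

    # Main archive for potato (Debian 2.2)
    deb http://ftp.debian.org/debian potato main contrib non-free
    # Security updates
    deb http://security.debian.org potato/updates main

After editing it, "apt-get update" refreshes the package lists, and "apt-get install <package>" does the rest.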
[Heather] It's really a matter of requirements analysis. Most of the flame wars arise from people stating their own preferences, and fussing over those instead of trying to figure out which would work best for you.
"Learning Linux" is a big definition: some people mean learning the Unix-like features that they've never encountered before; some people mean learning to use the same things in Linux that they already know how to use in other systems. These are, to say the least, rather opposite needs...
If you want to goof off learning Linux but are very afraid of touching your hard drive's data, there are a few distributions designed to run off of a CD, or out of RAM. One pretty good one that runs directly from a RAMdisk is Tom's rootboot (http://www.toms.net/rb). While a lot of people use it merely as a rescue disk, Tom himself lives in it day to day. But, it's not graphical. And, it's libc5 based, so it's a little strange to get software for. It uses a different shell than most major distributions, but the same kernels. It's not exactly aimed at "just surfing the web and doing email", which I often hear newbies say they'd be happy with. Linux Weekly News (http://www.lwn.net) has recently sorted their distributions list, so you could find a CD-based distro that meets these more mainstream desires fairly easily there.
If you want to learn about things from their raw parts, the way some kids like to learn about cars by putting one together themselves, there is a Linux From Scratch HOWTO stored at the LDP site (http://www.linuxdoc.org).
If the newbie's native language isn't English, he or she probably wants a localized distro, that is, one that installs and whose menus, etc. are in their language. (I'm guessing that such a newbie wouldn't be you - your .sig links were to purely English websites.) You can find a bunch of those at LWN too, but you'll have to go looking at home pages to be sure what languages are covered.
Otherwise, you probably want a "normal" linux, in other words, a major distro. Newbies generally want to be able to ask their local gurus for help, rather than wonder if some random wizard on the internet will ever answer them. If your local techie pals have a favorite, try that - they'll be better at helping you with it than stuff they don't know as well. I could be wrong of course - some techie folks prefer to learn stuff the same time you do, and you can get a great sense of energy by sometimes figuring out a thing here and there faster than they do. But by and large, gaining from someone else's experience will make things smoother, a smooth start will generally be more fun, and enjoying your first experiences will make you more willing to experiment later.
If you like to learn from a book, there are a fair number of books that are about a specific distro, and have a CD of that distro in the back. These are good, but not usually aimed at people who want to dual boot. Just so you know.
The big commercial brands usually try to push that they're an easy install. What they don't push so much is their particular specialty, the market they are aiming for. I've heard good things about Corel (esp. for dual boot plans), and I've seen good things with both SuSE and Storm. Mandrake and Debian have both been a little weird to install - not too bad, but I'm experienced, and enjoy wandering around reading the little notes before doing things ... if you want the computer to be bright enough to do it all by itself, these might not be for you. (Note, my Mandrake experience is a version old. And they compile everything Pentium-optimized, so if things go smoothly, it will usually be a lot faster system.) Several of the brands are now pushing a "graphical installer" which is supposed to be even easier. However, if you have a really bleeding-edge video card, it would also make the distro a real pain to install. Storm and RedHat favor graphical over non-graphical installs. LibraNet has a nongraphical install that still gives Debian a somewhat friendlier setup. I hear that Slackware is fairly friendly to people who like to compile their own software, and I never hear anything about their installer, so maybe it is really incredibly easy. Or maybe my friends don't want to tell me about their install woes once they get going, I dunno.
If RedHat (6.2, I have to say I haven't tried 7 yet) is where you're going, and their graphical install is a bummer for you, use their "expert" mode. Their "text" mode is almost useless, and they really do have lots of help in expert mode, so it's not as bad as you would think.
In any case, I would recommend backing up your current system if there's anything on it you want to keep, not because the installs are hard - they're nothing like the days before the 1.0 kernel - but because this is the most likely time to really mangle something, and you'll just kick yourself if you need a backup after all and don't have one.
The next thing to consider is your philosophy. Do you want to be a minimalist, only adding stuff that makes sense to you (or that you've heard of), and then add more later? If so, you want a distro that makes it really easy to add more later. Debian and its derivatives are excellent for this - that includes Corel, Libranet, and Storm. SuSE's YaST also does pretty well for this, but they don't update as often... on the other hand, they don't get burned at the bleeding edge a lot, either. If most of the stuff you'll add later is likely to be commercial, RedHat or a derivative like Mandrake might be better - lots of companies ship RedHat compatible rpm's first, and get around to the other distros later, if at all.
If you have a scrap machine to play on, try several distros, one at a time; most of them are available as inexpensive eval disks from the online stores.
If you'd rather install the kitchen sink and take things back out later, any of the "power pack" type stuff, 3 CDs or more in the set, might work for you. Most of these are still based on major distros anyway, there's just a lot more stuff listed, and you swap a couple of CDs in. Umm, the first things you'll probably end up deleting are the packages to support languages you don't use...
A minimal but still graphical install should fit in a gigabyte or so - might want 2. A more thorough setup should go on 6 Gb of disk or so (you can, of course, have more if you like). It's possible to have usable setups in 300 to 500 Mb, but tricky... so I wouldn't recommend that a newbie impose such restrictions on himself.
To summarize, decide how much disk you want to use (if any!) and whether you want to go for a minimal, a mostly-normal, or a full-to-the-brim environment. Consider what sort of help you're going to depend on, and that might make your decision for you. But at the end, strive to have fun.
[Ben] Heather, I have to say that this is about the most comprehensive answer to the "WITBLD" question yet, one that looks at a number of the different sides of it; color me impressed.
WITBLD = "What Is The Best Linux Distro"
[Heather] The key thing here is that there are several aspects of a system. When one is "easiest" for you it doesn't mean all the others are. So, you have to decide what parts you care the most about making easy, and what parts you consider worth some effort for the experience you'll get. Once you know that, you are less of a newbie already. I hope my huge note helped, anyway.
Well, I bought Caldera OpenLinux eDesktop 2.4, so I am looking for people who have had experience with OpenLinux. I still haven't installed it on a computer yet, as I need to upgrade the computer; but once I do that I will install it (though I do plan on buying other distros to try out).
--- Michael Lauzon
From vinod kumar d
Answers by: Heather Stern, Ben Okopnik
Hello, I'm about to install Red Hat Linux as a dual boot on my machine running Win98, which came preconfigured to use my 30 gigs all for Windows. For all the browsing I did through Red Hat's online docs, I couldn't figure out one basic thing: should I have an unallocated partition to begin installation, or will Disk Druid/FIPS do the "non-destructive repartitioning" as part of the install?
[Heather] I do not remember if RedHat will do the right thing here or not. CorelLinux will (in fact, made a great PR splash by being one to make this pleasant). Um, but CorelLinux is a debian-type system, not a rpm type system. I'm not sure what requirements had you pick RedHat, maybe you need something a bit more similar.
[Ben] Having recently done a couple of RH installations, I can give you the answer... and you're right, it's not the one you'd like to hear.
No, RedHat does not do non-destructive repartitioning. Yes, you do need to have another partition (or at least unallocated space on the drive) for the installation - in fact, you should have a minimum of two partitions for Linux, one for the data/programs/etc., and the other one for a swap partition (a max of 128MB for a typical home system.) There are reasons for splitting the disk into even more partitions... unfortunately, I haven't found any resources that explain it in any detail, and a number of these reasons aren't all that applicable to a home system anyway.
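As a concrete (and purely hypothetical) picture, a 30GB disk shared between the two systems might end up partitioned like this:

    /dev/hda1   FAT32    ~24GB    Windows 98 (shrunk from the full disk)
    /dev/hda2   ext2      ~6GB    Linux /
    /dev/hda3   swap     128MB    Linux swap

The exact split is up to you, of course; the device names and sizes are only for illustration.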
If I do need the unallocated partition, which is the best partition software to use? 'Cos I have stuff that I don't want to lose.
[Heather] If you feel up to buying another commercial product, PartitionMagic is very highly regarded. Not just amongst us linux-ers, but also for people who wanted to make a new D:, give half a server to Novell, or something like that. It's very smart.
It's also what comes in CorelLinux...
If you're more into Linux than MSwin and comfortable with booting under a rescue environment, I'm pleased to note that GNU parted (the GNU partition editor) deals well with FAT32 filesystems. Tuxtops uses that.
If you're feeling cheap, FIPS is a program that can do the drive division after booting from a DOS floppy, which you can easily make under the MSwin you already have. I'm pretty sure a copy of FIPS is on the redhat CD as a tool, so you could use that. It doesn't do anything but cut the C: partition into two parts. You'd still use disk druid later to partition the Linux stuff the way you want.
(Of course mentioning buying a preloaded dual boot from one of the Linux vendors like Tuxtops, VA Linux, Penguin, or others is a bit late. I'm sure you're fairly fond of your 30 Gb system with the exception of wanting to set it up just a bit more.)
None of these repartitioners will move your MS Windows swap file, though. In the initial setup, MS is as likely to have put the swap near the beginning of the drive as near the end. I recommend that you use the Control Panel's advanced system options to turn off the swap file, run your favorite defragmenter, and then make a nice solid backup of your Windows stuff before going onwards.
This isn't because Linux installs might be worse than you think (though there's always a chance) but because Windows is fragile enough on its own, and frankly, backups under any OS are such a pain that some people don't do them very often, or test that they're good when they do. (I can hardly imagine something more horrible than to have a problem, pat yourself on the back for being good enough to do regular backups, and discover that the last two weeks of them simply are all bad. Eek!) So now, while you're thinking:
"cos i have stuff that i dont want to lose."
is a better time than most!
[Ben] Following on to Heather's advice, here's a slightly different perspective: I've used Partition Magic, as well as a number of other utilities to do "live partition" adjustment (i.e., partitions with data on them.) At some point, all of these, with one exception, have played merry hell with boot sectors, etc. - thus reinforcing Heather's point about doing a backup NOW. The exception has turned out to be cheap old FIPS; in fact, that's all I use these days.
FIPS does indeed force you to do a few things manually (such as defragmenting your original partition); I've come to think that I would rather do that than let PM or others of its ilk do some Mysterious Something in the background, leaving me without a hint of where to look if something does go wrong. Make sure to follow the FIPS instructions about backing up your original boot sector; again, I've never had it fail on me, but best to "have it and not need it, rather than need it and not have it."
In regard to the Windows swap file, the best way I've found to deal with it is by running the defrag, rebooting into DOS, and deleting the swapfile from the root directory. Windows will rebuild it, without even complaining, the next time you start it.
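For reference, the Windows 9x swap file is normally named WIN386.SWP and, depending on your settings, lives in C:\ or C:\WINDOWS; so from plain DOS the deletion is just:

    del C:\WIN386.SWP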
I really tried a lot of FAQs before asking you, so could you go easy if you're planning to: a) flame me about RTFM'ing first.
[Heather] Oboy, a chance to soapbox about doing documentation! I promise, no flame!
If we should do this we generally are at least kind enough to say which F'ing M's to R. Which brings another thought to mind. FAQs and HOWTOs are okay, but they are sort of... dry. Maybe you could do an article for Linux Gazette about your experience, and "make linux a little more fun" (our motto) for others who are doing the dual boot install their first time out.
It's really sad that the FAQs and HOWTOs aren't as useful to everyone as they could be.
If one of them was pretty close and just plain wasn't quite right, or wasn't obvious until you already went through it, give a shot at improving it a little, and send your notes back to the maintainer. If he or she doesn't answer you in a long time (say a month or two) let us know, maybe get together with some friends and see if you can become its new maintainer.
To be the maintainer of a Linux project doesn't always mean to write everything in it, just sort of to try and make sure it stays with the times. Linus himself doesn't write every little fragment of code in the kernel - though maybe he reads most of it :D - he maintains it, and keeps it from falling apart in confusion. This is really important. Documents need this too.
Because these things are not meant to be set in stone; they're written to be useful, and yeah, sometimes it happens that the fella who first wrote a given doc has moved on to other things. Meanwhile folks like you join the Linux bandwagon every month and still need them, but Linux changes and so do the distros.
But, it's ok if you personally can't go for that. It's enough if we can find out what important HOWTOs could stand some improvement, since maybe it will get some more people working on them.
b) ignoring me totally.
[Heather] Sadly, we do get hundreds and hundreds of letters a month, and don't answer nearly that many. But hopefully what I described above helped. If it isn't enough, ask us in more detail - there's a whole Gang of us here, and some of us have more experience than others.
[Ben] Well, OK - you get off scot-free this time, but if you ever ask another question, we'll lock you in a room with a crazed hamster and two dozen Pokemon toys on crack. The Answer Gang in general seems to have taken its mandate from Jim Dennis, the original AnswerGuy: give the best possible answers to questions of general interest, be a good information resource to the Linux community, and eschew flames - incoming or outgoing. <Grin> I like being part of it.
BTW, really liked your answers in the column (well, here's hoping some old-fashioned flattery might do the trick)
thanks in advance...
vinod
[Heather] Thanks, vinod. It's for people like you (and others out there who find their answer and never write in at all) that we do this.
[Ben] If you scratch us behind the ears, do we not purr? Thanks, Vinod; I'm sure we all like hearing that our efforts are producing useful dividends. As the folks on old-time TV used to say, "Keep those letters and postcards coming!"
From David Wojik
Answered by: Heather Stern, Paul MacKerras
I need to modify the PPP daemon code to enable dynamic requests to come in and renegotiate link parameters. I also need to make it gather packet statistics. Do you know of any textbooks or other documentation that explain the structure of the PPP protocol stack implementation? The HowTos only explain how to use Linux PPP, not how to modify it.
Thanks,
Dave
[Heather] Once the ppp link is established, it's just IP packets like the rest of your ethernet, so you should be able to get some statistics via ifconfig or other tools which study ethernet traffic, I'd think.
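For instance, once the link is up, the standard tools will show per-interface packet counts (the interface name ppp0 is an assumption - check which one your link actually got):

    ifconfig ppp0
    cat /proc/net/dev

The ppp source package also ships a pppstats utility that reports PPP-level statistics.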
Still, renegotiating the link sounds interesting (I'm not sure I see what circumstances should cause it ... your modem renegotiating a speed is not at all the same thing). Anyways, if for some reason the source code of the PPP daemon itself isn't enough, your best bet would probably be to start a conversation with Paul Mackerras, the ppp maintainer for Linux. After all, if you really need this feature, there are likely to be others out there who need it too. I've cc'd Paul, so we'll see what he has to say.
Hi Heather,
Thanks for responding so promptly. My problem is that the product I'm working on uses Linux PPP to communicate between routers, not modems. My software needs to be able to do things dynamically, like take down the link, start an echo test, or change the MRU.
[Heather] It sounds like you want to create a router-handler to do that part, one that looks like a serial interface as far as the ppp functions are concerned. Then these can remain separated off.
The PPP protocol provides for dynamic renegotiation of link parameters but since Linux PPP was written primarily for modems connecting to ISPs, the PPP daemon is designed to take all of the parameters on the command line when it is invoked; after that it locks out any new input. My software also needs to count all of the different LCP packet types (Config-Ack, Config-Nak, etc.) and provide an interface to retrieve them.
[Heather] And logically the router-handler would do these too? (Sorry, I'm not up on whether these are internal to the PPP protocols, they look like higher level stuff to me.)
The PPP Protocol Stack implementation consists of thousands of lines of code. So what I am hoping to find is some high level documentation that will help me to determine how to modify only the parts I need. Even better would be to find some software that already does this as you suggest.
[Heather] Hmm. Well, best of luck, and we'll see if Paul can point us to something good.
Thanks again,
Dave
[Paul] David,
As you say, the Linux pppd doesn't currently let you change option values and initiate a renegotiation (not without stopping pppd and starting a new one). It should however respond correctly if the peer initiates a renegotiation. I have some plans for having pppd create a socket which other processes can connect to and issue commands which would then mean that pppd could do what you want. I don't know when I'll get that done however as I haven't been able to spend much time on pppd lately. As for counting the different packet types, that wouldn't be at all hard (you're the first person that has asked for that, though).
-- Paul Mackerras, Senior Open Source Researcher, Linuxcare, Inc.
Linuxcare. Support for the revolution.
Between Bryan Henderson and Mike Orr
In answering a question about the role of an ISP in making one's cable-connected computer vulnerable to hackers, Mike Orr makes a misstatement about the Internet that could keep people from getting the big picture of what the Internet is:
The cableco or telco connects you to your ISP through some non-Internet means (cable or DSL to the cableco/telco central office, then ATM or Frame Relay or whatever to the ISP), and then the ISP takes it from there. Your ISP is your gateway to the Internet: no gateway, no Internet.
[Bryan] The copper wires running from my apartment to the telephone company's central office are part of the Internet. Together with the lines that connect the central office to my ISP, this forms one link of the Internet.
The Internet is a huge web of links of all different kinds. T3, T1, Frame Relay, PPP over V.34 modem, etc.
The network Mike describes that all the ISPs hook up to (well, except the ones that hook up to bigger ISPs), is the Internet backbone, the center of the Internet. But I can browse a website without involving the Internet backbone at all (if the web server belongs to a fellow customer of my ISP), and I'm still using the Internet.
I would agree that you're not on the Internet if you don't have some path to the Internet backbone, but that path is part of the Internet.
[Mike] It depends on how you define what the Internet "is". My definition is, if a link isn't communicating via TCP/IP, it's not Internet. (IP isn't called "Internet Protocol" for nothing.) This doesn't mean the link can't function as a bridge between Internet sites and thus hold the Internet together.
Internet hops can be seen by doing a traceroute to your favorite site. The listing doesn't show you what happens between the hops: maybe it's a directly-connected cable, maybe it's a hyperspace matter-transporter, or maybe it goes a hundred hops through another network like ATM or Frame Relay or the voice phone network. Traceroute doesn't show those hops because they're not TCP/IP--the packet is carried "somehow" and reconstructed on the other side before it reaches the next TCP/IP router, as if it were a direct cable connection.
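A plain run makes the point (any reachable host will do):

    traceroute www.linuxgazette.com

Each numbered line is one IP-level hop; whatever carries the packets between two adjacent lines - copper, ATM, or that matter-transporter - never shows up.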
Of course communicating with another user at your ISP is "Internet communication", provided the ISP is using TCP/IP on its internal network (as they all do nowadays, not counting a parallel token ring network at an ISP I used to work at, where the mailservers were on the token ring). And of course, the distinction is perhaps nitpicky for those who don't care what precisely the network does as long as it works.
[Bryan] I'm with you there. But the link between my house and my ISP (which is quite ordinary) is TCP/IP. I have an IP address, my ISP's router has an IP address and we talk TCP/IP to each other. In the normal case that my frame's ultimate destination is not the router, the router forwards it, typically to some router in the backbone. Traceroute shows the hop between my house and the ISP.
All of this is indistinguishable from the way frames get from one place to another even in the heart of the Internet.
The layers underneath IP might differ, as you say, but you seem to be singling out protocols used in the home-ISP connection as not real TCP/IP, whereas the links between ISPs are real TCP/IP. There's no material difference between them. If not for the speed and cost disadvantage, the Internet backbone could be built on PPP over 28.8 modems and POTS lines.
One way we used to see that the home-ISP connection really _wasn't_ the Internet was AOL. You would talk AOL language to an AOL computer which was on the Internet and functioned as a gateway. The AOL computer had an IP address but the home computer did not. But now even AOL sets up an IP link between the AOL computer and the home computer. It's via a special AOL protocol that shares the phone line with non-IP AOL communications, but it's an IP link all the same and the home computer is part of the Internet whenever AOL is logged on.
From Shane Welton
Answered by: Ben Okopnik, Heather Stern, Mike Orr
As you know, the world has gone wild for Linux, and the company I work for is no exception. We work with classified data that can be somewhat of a hassle to deal with. The only means of formatting a hard disk is the analyze/format command that comes with Solaris. That method has been approved as a declassification method.
[Ben] Actually, real low-level formats for IDE hard drives aren't user-accessible any more: they are done once, at the factory, and the only format available is a high-level one. This does not impact security much, since complete data erasure can be assured in other ways - such as multiple-pass overwrites (if I remember correctly, a 7-pass overwrite with garbage data is recognized as being secure by the US Government - but it's been a while since I've looked into it.)
I was hoping you could tell me if Linux offers a very similar low-level format that would ensure complete data loss. I have assumed that "dd if=/dev/zero of=/dev/hda" would work, but I need to be positive. Thanks.
[Ben] Linux offers something that is significantly more secure than an "all zeroes" or "fixed pattern" overwrite: it offers a high-quality "randomness source" that generates output based on device driver noise, suitable for one-time pads and other high-security applications. See the man page for "random" or "urandom" for more info.
Based on what you've been using so far, here's something that would be even more secure:
dd if=/dev/urandom of=/dev/hda
If you're concerned about spies with superconducting quantum-interference detectors <grin>, you can always add a "for" loop for govt.-level security:
for n in `seq 7`; do dd if=/dev/urandom of=/dev/hda; done
This would, of course, take significantly longer than a single overwrite.
[Mike] Wow, seven-level security in a simple shell script!
[Ben] <Grin> *I've* always contended that melting down the hard drive and dumping it in the Marianas Trench would add just that extra touch of protection, but would they listen to me?...

[Heather] Sorry, can't do that, makes the Marianas Trench too much of a national security risk. Someone could claim that our data has been left unprotected in international waters.

Or, why security is a moving target: what is impossible one year is a mere matter of technology a few years or a decade later.
[Heather] You wish.
[Mike] My point being, that a one-line shell script can do the job of expensive "secure delete" programs.
[Heather] /dev/urandom uses "real" randomness, that is, quanta from various activities in the hardware, and it can run out of available randomness. We call its saved bits "entropy" which makes for a great way to make your favorite physics major cough. "We used up all our entropy, but it came back in a few minutes."
[Ben] Hey! If we could just find the "/dev/random" for the Universe...
[Heather] When it's dry I don't recall what happens - maybe your process waits on the device, which would be okay. But if you get non-randomness after that (funny how busy the disk controller is) you might not really get what you wanted...
[Ben] That's actually the difference between "random" and "urandom". "random" will block until it has more 'randomness' to give you, while "urandom" will spit up the entire entropy pool, then give you either pseudorandomness or a repeat (I'm not sure which, actually), but will not block.
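If you're curious, you can watch the entropy pool itself: 2.2 and later kernels expose the current count under /proc (the path is from the kernel documentation and may vary between versions):

cat /proc/sys/kernel/random/entropy_avail

Read a chunk from /dev/random and check again - the number drops, then creeps back up as the system gathers more noise.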
[Ben] You're welcome to experiment - by which I mean, try it and study the results, check that they're what you want or not (confirm or refute the hypothesis).
I'm not clear from the original request if they're trying to clear the main drive on a system, or some secondary data drive. If it's the main, I'd definitely want to boot from Tom's rootboot (a RAM based distro) so there'd be no chance of the system resisting getting scribbled upon, or failing to finish the job. Also continuing to multitask (Tom's has 4 virtual consoles, you can read some doc files or something) will give /dev/urandom more noise sources to gather randomness from.
/dev/random would be faster - not as random, but at 7 times, it's (wince now, you know what I'm going to say) good enough for government work. MSwin doesn't have a /dev/urandom, it only has pseudorandomness. At least, last I looked.
[Ben] Again, the other way around: "urandom" would be faster but marginally less secure (after 7 overwrites? The infinitesimal difference croggles my mind...), while "random" is slower but has the true /gelt/. Given that "/dev/hda" was used in the original example, Tom's RootBoot would be an excellent idea.

[Mike] I thought /dev/urandom was the faster but less random one.

[Heather] I just looked in the kernel documentation (/usr/src/linux/Documentation) and you are correct. /dev/random (character major 1 minor 8) is listed as nondeterministic, and /dev/urandom (character major 1 minor 9) is listed as faster and less secure.

Anyways our readers will have to decide for themselves whether they want 7 layers of pseudo-random, or if their system will be busy enough in different ways to get a nice batch of true randomness out of the "better" source.
[Heather] I hear that the i810 motherboard has a randomness chip, but I don't know how it works, so I don't know how far I'd trust it for this sort of thing.
Thanks for the help and the humor, I shall pass the information on to our FSO in hopes that this will suffice. Again, thanks.
Shane M. Walton
From Dave
Answered By: Ben Okopnik
Hello Answerguy,
Since installing Debian a few days ago, I've been more than pleased with it. However, I have run into a wee problem which I was hoping you could help me with. Yesterday, I realised I hadn't installed GPM. I immediately got round to installing it using apt (a lovely painless procedure when compared to RPM). All went great until I started to run X, at which point my mouse went insane - just flying round the desktop of its own free will every time I so much as breathed on the hardware that operated it. I immediately killed GPM using the "gpm -k" command, but to no avail. Then I shut down X, and restarted it with no GPM running - the mouse refused to move at all. I then proceeded to uninstall GPM, and yet the pointer remains motionless :(. I'm using a PS/2 mouse. Any suggestions?
I thank you for your time
-Dave-
Yep; it's a bad idea to kill or uninstall GPM.
In the Ages Long, Long ago (say, 3 years back), it used to be standard practice to configure two different ways to "talk" to the mouse: GPM for the console, and the mouse mechanism built into X. Nowadays, the folks that do the default configuration for X in most distributions seem to have caught on to the nifty little "-R <name>" switch in GPM. This makes GPM pass the mouse data onto a so-called "FIFO" (a "first in - first out" interface, like rolling tennis balls down a pipe) called "/dev/gpmdata" - which is where X gets _its_ mouse info. By removing GPM, you've removed the only thing that pays any attention to what the mouse is doing.
So, what's to do? Well, you could configure X to actually read the raw mouse device - "/dev/psaux" in most computers today, perhaps "/dev/ttyS0" if you have a serial mouse on your first serial port (or even "/dev/mouse", which is usually a symlink to the actual mouse device.) My suggestion is, though, that you do not - for the same reason that the distro folks don't do it that way. Instead, reinstall GPM - in theory, your "/etc/gpm.conf" should still be there, and if it isn't, it's easy enough to configure - and make sure that it uses that "-R" switch (hint: read the GPM man page.)
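As a sketch of what that setup ends up looking like - the device and mouse type here are assumptions, so adjust them for your hardware - gpm running as a repeater would be started with something like

gpm -m /dev/psaux -t ps2 -R

and the matching XF86Config "Pointer" section would read

Section "Pointer"
    Protocol    "MouseSystems"
    Device      "/dev/gpmdata"
EndSection

(The repeater talks the MouseSystems protocol on /dev/gpmdata regardless of what the real mouse speaks, which is why X is told "MouseSystems" even for a PS/2 mouse.)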
Once you've done all that, you'll now need to solve the "jumping mouse" problem. In my experience, that's generally caused by the mouse type being set to the wrong value (usually "PS/2" instead of "Microsoft".) Here's the easy way to do it: from a console, run "XF86Setup"; tell it to use your current configuration when prompted. Once X starts up and you get the "Welcome" screen, tab to the "Mouse" button and press "Enter". Read the presented info page carefully: since you'll be using the keyboard to set the options, you'll need to know which keys do what. If you forget, "Tab" will get you around.
Make sure that the "Mouse Device" is set to "/dev/gpmdata", and try the various mouse protocols - these are obviously dependent on your mouse type, but the most common ones I've seen have been PS/2 and Microsoft. Remember to use the "Apply" button liberally: the changes you set won't take effect until you do.
Once you have the right protocol, the mouse should move smoothly. I suggest that, unless you have a 3-button mouse, you set the "Emulate3Buttons" option - you'll need it to copy and paste in X! Also, play with the resolution option a bit - this will set the mouse response. I've seen high resolution "lock up" a mouse - but by now you know how to use that "Tab" key...
Once you're done, click "Done" - and you're ready to fly your X-fighter.
From G David Sword
Answered By: Ben Okopnik, Mike Orr
I have a text file full of data, which I would like to turn into a bunch of fax documents for automated faxing. I could simply parse the file in perl, and produce straight text files for each fax.
Instead of this, I would like to be able to build up something which resembles a proper purchase order, or remittance, containing logos, boxes for addresses etc. Could I have an expert opinion (or six) on what would be the best method to use to achieve this - I have read a bit about LaTeX and groff, but I am not sure if they are the best solution or not.
Thanks in advance
G. David Sword
[Ben] Since you have already implied that you're competent in Perl, why not stick with what you know? Parse the data file (which you will have to do anyway no matter what formatting you apply to it afterwards), then push it out as HTML - Perl is excellent for that. I can't imagine an order form so complex that it would require anything more than that.
As a broader scope issue, learning LaTeX or groff is, shall we say, Non-Trivial. In my !humble opinion, neither is worth doing just to accomplish a single task of the sort that you're describing. SGML, on the other hand, is an excellent "base" format that can be converted to just about anything else - DVI, HTML, Info, LaTeX, PostScript, PDF, RTF, Texinfo, troff-enhanced text, or plaintext (as well as all the formats that _those_ can be converted into.) You can learn enough to produce well-formatted documents in under an hour (no fancy boxes, though) - "/usr/share/doc/sgml-tools/guide.txt.gz" (part of the "sgml-tools" package) will easily get you up to speed. If you want the fancy boxes, etc., check out Tom Gordon's QWERTZ DTD <ftp://ftp.gmd.de/GMD/sgml/sgml2latex-format.1.4.tar.gz>, or the LinuxDoc DTD (based on QWERTZ.) I haven't played with either one to any great extent, but they're supposed to do mathematical formulae, tables, figures, etc.
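To give you a feel for it, here is a minimal LinuxDoc skeleton - the content is obviously made up. With the sgml-tools package installed, "sgml2txt order.sgml" or "sgml2html order.sgml" turns it into plain text or HTML:

<!doctype linuxdoc system>
<article>
<title>Purchase Order
<author>Example Supplier Ltd.
<date>November 2000
<sect>Items
<p>
Ten gross of left-handed widgets, part no. 1234.
</article>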
[Mike] Let me second this. If you need to get the reports out the door yesterday, stick with what you know. Get them to print in any readable text format now and then worry about enhancements later. The code you use to extract the fields and calculate the totals will still be useful later, whether you plug it into the new system directly or convert it into a new language.
TeX and troff both have a learning curve, and you have to balance this against how useful they will be to your present and future purposes. At best, they make a better temporary "output format" nowadays than a document storage format. SGML or XML is a much better storage format because it's more flexible, given the unpredictable needs of the future.
Actually, your "true" storage format will probably remain your flat file or a database, and then you'll just convert it to SGML or XML and then to whichever print format you want (via a generic SGML-to-something tool or your own home-grown tool).
I would look at XML for the long term, even if you don't use it right away. Perhaps someday you'll want to store your data itself in XML files rather than in the text files you're using. This does allow convenient editing via any text editor, and for new data, a program can create an empty XML structure and invoke an editor on it. And as time goes on, more and more programs will be able to interpret and write XML files. On the other hand, it is darn convenient to have that data in a database like MySQL for quick ad-hoc queries...
If you just want to learn a little bit of formatting for a simple document, troff is probably easier to learn than TeX.
You can always use the "HTML cop-out" one of my typesetting friends (Hi, johnl!) tells people about when they ask him what's an easy way to write a formatted resume. Write it in HTML and then use Netscape's print function to print it to PostScript.
From Bob Glass (with a bonus question from Dan Wilder)
Answered by: Ben Okopnik
Hi, everyone. I'm a newbie and need help with a linux machine that goes to sleep and has to be smacked sharply to wake it up. I'm trying to run a proxying service for user authentication for remote databases for my college. That's all the machine is used for. The Redhat installation is a custom, basically complete, installation of Redhat Linux 6.2. The machine is a 9-month old Gateway PIII with 128MB of RAM. The network adapter is an Intel Pro100+. My local area network is Novell 5.x and my institution has 4 IP segments. I have not configured my linux installation beyond defining what's needed to make the machine available on the local network (machine name, hard-assigned IP address, default gateway etc).
<Snip>
The problem I'm unable to deal with is: my proxy machine disappears from the network or 'goes to sleep.' At that point, I can't use a web browser to contact the proxy service machine, I can't telnet to the machine, and I can't ping the machine. However, if I go across the room to the proxy machine, open the web browser, go to a weblink (i.e., send packets out from the machine), then go back to my computer and test a link, ezproxy responds and all is well. However, usually in an hour or so, the proxy machine is unreachable again. Then much later or overnight, it will begin to respond again, usually after a 5-7 second delay.
[Ben] First, an easy temporary fix: figure out the minimum time between failures and subtract a couple of minutes; run a "cron" job or a backgrounded script that pings a remote IP every time that period elapses. As much as I hate "band-aid fixes", that should at least keep you up and running.
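Such a crontab entry might look something like the following (Vixie cron syntax, as shipped with Red Hat; the target address is hypothetical - pick a host on the far side of wherever the link goes quiet. Redirecting the output keeps cron from mailing root every fifteen minutes):

# in root's crontab ("crontab -e"): ping a remote host every 15 minutes
*/15 * * * * ping -c 3 10.0.0.1 >/dev/null 2>&1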
Second: I've encountered a similar problem twice before. Once with sucky PPP in an older kernel (2.0.34, if I remember correctly), and one that involved a flaky network card on a Novell network (I've sworn off everything but two or three brands of cards since.) Perhaps what I'd learned from troubleshooting those may come in useful.
[Dan] If you don't mind saying, which brands have you had the best luck with under Linux?

[Ben] Intel EE Pro 10/100Bs have been faultless. I've used a stack of those to replace NE2K clones, and a number of problems - some of which I would have sworn were unrelated to hardware - went away. I can't say the same for the various 3Coms I've tried; whether something in the driver software or in the cards themselves (under Linux and Windows both), I could not get consistent performance out of them. My experience with LinkSys has been rather positive, although I've never had the chance to really beat up on them; perhaps this has to do with the quality of Donald Becker's driver, as they have been very friendly to the Linux community from the start (this was the reason I decided to try playing with them in the first place.)

For consistently high throughput, by the way, I have not found anything to beat the Intels.
[Ben] Note that I'm not trying to give you The One True Solution here; this seems to be one of those problems that will require an iterative approach. The way I'd heard this put before is "when you don't understand the problem, do the part that you do understand, then look again at what's left."
A good rule of thumb is that if the problem is happening at regular intervals, it's software; if it's irregular, it's hardware. Not a solution, but something to keep in mind.
I have turned off power management in the BIOS. I have stopped loading the apm daemon. I have tried a different network adapter, 3Com509b. I have even migrated away from another computer to the machine described above. And still the machine goes to sleep ...!?$#@
[Ben] When it goes to sleep, have you tried looking at the running processes (i.e., "ps ax")? Does PPP, perhaps, die, and the proxy server restart it when you send out a request? Assuming that you have two interfaces (i.e., one NIC that talks to the LAN and another that sees the great big outside world), are both of them still up and running ("ifconfig" / "ifconfig -a")?

What happens if you set this machine up as a plain workstation? No proxy server, minimum network services, not used by anyone, perhaps booted from a floppy with an absolutely minimal Linux system - with perhaps another machine pinging it every so often to make sure it's still up? If this configuration works, then add the services (including the proxy server) back, a couple at a time, until something breaks.

This is known as the "strip-down" method of troubleshooting. If it works OK initially, then the problem is in the software (most likely, that is: I've seen NICs that work fine under a light load fall apart in heavy traffic.) If it fails, then the problem is in the hardware: NICs have always been ugly, devious little animals... although I must admit they've become a lot better recently; I can't say that I've had any problems with Intel Pros, and I've abused them unmercifully.

(A related question: When you moved from one machine to the other, did you happen to bring the NICs along? This could be important...)
[Ben] My bad, there; I missed the part about the different NIC in the original request for help, even though I quoted it (blame it on sleep-deprivation...) - ignore all the stuff about the Evil NICs; it's certainly starting to sound like software.
On Tue, Nov 07, 2000 at 11:37:46AM -0500, Bob Glass wrote:

Dear Mr. Okopnik,
Thanks so much for your suggestion about creating a cron job which pings a network device. I did just that, and now the problem is 'solved.' (finding a source which detailed how to set up a cron job to run every 15 minutes _and_ not e-mail the output to the root account was a bit of a challenge!) It's a measure of what a newbie I am that this didn't occur to me on my own!
I've talked to many people about this problem and have come to the conclusion that there's a weird mismatch between hardware and software at both the machine and network level (routers, switches, NICs, Linux, Novell who knows!@#$ I wish Novell would write network clients for Linux and Solaris. I have a Solaris machine which very occasionally has this same problem.) Having tussled with this for over a month and been shown a workaround which both works and causes no problems, I'm satisfied. And as director of my library, I've got to move on to other tasks.
Again, many thanks.
Bob Glass
[Ben] You're certainly welcome; I like being able to "pay forward" at least some of the huge debt I owe to the people who helped me in my own early struggles with Linux.
Pinging the machine is a workable solution, and I'm glad that it mitigated the problem for you - but let me make a suggestion. If you do not have the time to actually fix it now (or even in the foreseeable future), at least write down a good description of the problem and the workaround that you have used. The concept here is that of a shipboard "deficiency log" - any problems aboard a ship that cannot be immediately resolved go into this log, thus providing a single point of reference for anyone who is about to do any kind of work. ("I'll just remove this piece of wire that doesn't look like it belongs here... hey, why are we sinking???") That way, if you - or another director/admin/etc. - have to work on a related problem, you can quickly refresh yourself on exactly why that cron job is there. A comment in "crontab" that points to the "log" file would be a Good Thing.
As I understand it, Caldera's OpenLinux promises full Novell compatibility/connectivity. I can't comment on it personally, since I have no experience with OpenLinux, but it sounds rather promising - Ray Noorda is the ex-CEO of Novell, and Caldera is one of his companies.
From John Hinsley
Answered by: Mike Orr
I want a web site, but it looks like I'll have to put together my own server and put it on someone's server farm because:
What do you mean by server farm? You're going to colocate your server at an ISP? (Meaning, put the server in the ISP's office so you have direct access to the ISP's network?)
I need to run Zope and MySQL as well as Apache (or whatever) in order to be able to use both data generated pages via Zope and "legacy" CGI stuff (and it's far easier to find a Perl monger when you want one rather than a Python one!). If this seems remotely sensible, we're then faced with the hardware spec of this splendid server.
I set up one Zope application at Linux Journal (http://www.linuxjournal.com/glue). It coexists fine with our Python and Perl CGI scripts.
<ADVOCACY LANGUAGE="python"> While it may be easier to find a Perl monger than a Pythoneer, us Python types are becoming more common. And anybody who knows any programming language will find Python a breeze to pick up. The programming concepts are all the same, the syntax is very comprehensible, and the standard tutorial is excellent. </ADVOCACY>
So, proposed spec:
Athlon 700, 3 x 20 GB IDE hard drives, 2 of which are software-raided together and the third of which is for incremental backup. 256 MB of RAM (at least), 1 100 Mbps NIC. OpenSSH as the mode for remote administration, but otherwise a lean kernel with an internal firewall.
Does this sound like a remotely viable spec?
You didn't say how many hits per month you expect this site to receive. Our server has less capacity than that, and it runs Linux Journal + Linux Gazette + some small sites just fine. And yes, our servers are colocated at an ISP. You get much better bandwidth for the price by colocating.
I discussed your spec with our sysadmin Dan Wilder (who will probably chime in himself) and concluded:
** An Athlon 700 processor is way overkill for what you need. (Again, assuming this is an "ordinary" web server.) An AMD K6-2 or K6-3 running at 233 MHz should be fine (although you probably can't get a new one with less than 500 MHz nowadays...) Web servers are more I/O intensive than they are CPU intensive. Especially since they don't run GUIs, or if they do, the GUI is idle at the login screen most of the time! And if you really want the fastest chip available, an Athlon 700 is already "slow".
** Your most difficult task will be finding a motherboard which supports the Athlon 700 adequately. One strategy is to go to the overclocking web pages (search "overclocking" at www.google.com) and see which motherboards overclock best with your CPU. Not that you should overclock, especially on a production server! But if a motherboard performs OK overclocking your CPU, it should do an adequate job running your CPU at its proper speed.
** 256 MB RAM may or may not be adequate. Since memory is the cheapest way to increase performance at high server load, why not add more?
** 3 x 20 GB IDE (1 primary, 1 for RAID, 1 for backup) should be fine capacity-wise. Are you using hardware RAID or software RAID? Software RAID is pretty unreliable on IDE. Will you have easy access to the computer when you have to work on it? Or does the ISP have good support quality, and will they handle RAID problems for you? One thing we want to try (but haven't tested yet) is the 3Ware IDE RAID cards.
** IDE vs SCSI. SCSI may give better performance when multitasking. Of course, it's partly a religious issue how much that performance gain is. Given that a web server is by nature a disk-intensive application, SCSI is at least worth looking into. Of course, SCSI is also a pain to install and maintain because you have to make sure the cables are good quality, ensure they are properly terminated, etc.
** 100 Mbps Ethernet card. Are you sure your ISP's network is 100 Mbps? 10 Mbps should be fine. If your server saturates a 10 Mbps line, you're probably running video imaging applications and paying over US$7000/month for bandwidth. Make sure your Ethernet card operates well at 100 Mbps; many 10/100 Mbps "auto-switching" cards don't auto-switch that well.
** OpenSSH for remote admin. Sure.
The biggest FTP site in the world, ftp.cdrom.com, runs on an ordinary PC with FreeBSD. And the Zopistas at the Python conference in January said Zope can handle a million hits per day on an ordinary PC.
***** There are several ways to integrate Zope with Apache. We chose the "proxy server" way because it allows Zope's web server (Zserver) to multitask. You run Apache at port 80, Zserver at 8080, and use Apache's ProxyPass directive to relay the request to Zserver and back. You have to do some tricky things with mod_rewrite and install a scary Zope product, but it works.
(Scary because it involves modifying the access rules for the entire Zope site, which can lock you out of your database if you're not careful, and because it makes Zope think your hostname/port is what Apache publishes them as, rather than what they really are, and this can also lock you out of your database if Apache isn't running or the rewrites or proxying aren't working. I refused to implement virtual hosts on our Zope server--because they also require playing with access rules--until a safer way comes along. Why not let Apache handle the virtual hosting since Apache is good at it? You can use a separate Zope folder for each virtual site, or even run a separate Zope instance for each.)
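Stripped of the mod_rewrite and access-rule pieces just described, the heart of the arrangement is a couple of lines in httpd.conf - a sketch in Apache 1.3 syntax, using the ports mentioned above:

# Apache answers on port 80 and relays requests to Zserver on 8080
ProxyPass         /   http://localhost:8080/
ProxyPassReverse  /   http://localhost:8080/

ProxyPassReverse rewrites redirects coming back from Zserver so they point at Apache rather than at port 8080.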
In the end, we decided not to go ahead with wide-scale deployment of Zope applications. This was because:
- Adequate Zope documentation was missing. Most documentation was geared for the through-the-web DTML content manager rather than the application programmer. It was a matter of knowing a method to do X must exist, then scouring the docs to find the method name, then guessing what the arguments must be.
- Zope wants to do everything in its own private world. But text files and CGI scripts can handle 3/4 of the job we need.
- Zope's main feature--the ability to delegate sections of a web site to semi-trusted content managers who will write and maintain articles using the web interface--was not really what we needed. Our content managers know vi and know how to scp a file into place. They aren't keen on adjusting to a new interface--and having to upload/download files into Zope's database--when it provides little additional benefit for them.
We decided what we really needed was better CGI tools and an Active Server Pages type interface. So we're now deploying PHP applications, while eagerly waiting for Python's toolset to come up with an equivalent solution.
Disclaimers: yes, Zope has some projects in development which address these areas (a big documentation push, Mozilla-enhanced administration interface, WebDAV [when vi supports it] for editing and configuring via XML, built-in support for virtual hosts, a "distributed database" that an ordinary filesystem directory can be a part of), but these are more or less still in the experimental stages (although deployed by some sites). And yes, Python has Poor Man's Zope and Python Server Pages and mod_python, but these are still way in alpha stage and not as optimized or tested as PHP is. I also want to look into AOLserver's embedded Python feature we read about in October (http://www.linuxgazette.com/issue58/washington.html), but have not had the chance to yet.
[Mike again] I forgot to mention MySQL.
Our web server runs MySQL alongside Apache and Zope. MySQL is called by CGI applications as well as Zope methods.
It took a while to get MySQLdb and ZMySQLDA (the Zope database adapter) installed, but they're both working fine now. I spent a couple of weeks corresponding with the maintainer, who was very responsive to my bug reports and gave me several unreleased versions to try. These issues should all be resolved now.
One problem that remained was that ZMySQLDA would not return DateTime objects for Date/DateTime/Timestamp fields. Instead it returned a string, which made it inconvenient to manipulate the date in Zope. Part of the problem, of course, is that Zope uses a same-name but incompatible DateTime module instead of the superior one the rest of Python uses (mxDateTime). I finally coded around it and just had the SQL SELECT statement return a pre-formatted date string and separate month and year integers.
Dear Mike,
thank you so much for a really comprehensive answer to my questions. Of course, it raises a few more questions for me, but I think the view is a bit clearer now.
Yes, I did mean colocation (co-location?). It's a term I have some problems with as it seems to suggest putting something in two places at one time.
We might be fortunate in that the funding for this is unlikely to come through before 2.4 about which I hear "around Christmas, early New Year". And even more so in that we could probably get away with hiring some server space for a month or two while we played around with the new server and tried to break it. Of course, this might well mean doing without much in the way of interactivity, let alone a database driven solution, but we can probably survive on static pages for a while and get some kind of income dribble going.
My inclination would be to go with software Raid and IDE (hence the attempt to break it!) but I will consider the other alternatives.
Ultimately whether we go with Zope (and in what context vis-a-vis Apache, or Zap) is going to have to depend on whether I can get it up and running to my satisfaction at home, but it's good to be reminded that PHP is a good alternative.
Once again, many thanks.
From Alex Kitainik
Answered by: Heather Stern
Hi!
I've found the 'neighbour table overflow' question in your gazette. The explanation for this case seems incomplete, though. The nastiest case can happen when there are two computers with the same name in the LAN. In that case the neighbour search enters an endless loop, and thus 'neighbour table overflow' can occur.
Actually, the arp cache doesn't care about names - it cares about MAC addresses (those things that look like a set of colon-separated hex values in your ifconfig output). But, it is a good point - some cards are dip-switch configurable, and ifconfig can change an interface's 'hw ether' address if you ask it to.
Between arpwatch and tcpdump it should be possible to seriously track down if you have some sort of "twins" problem of either type, though. At the higher levels of protocol, having machines with the same name can cause annoying problems (e.g. half the samba packets going to the wrong machine) so it's still something you want to prevent.
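For a quick look without setting up arpwatch first, something like this (run as root on the segment in question) prints the ARP traffic with its link-level headers; two different hardware addresses answering for the same IP means you've found your twins:

tcpdump -e -n arp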
PS. I apologize for my English (it isn't my mother tongue...)
Regards -- Alex.
Your English is fine.
From Kopf
Answered by: Ben Okopnik
Hi,
I want to set up a home network, with 2 machines - workstation & server. The problem is, I want to configure Linux so that if I use the workstation, nothing is saved on the local drive, everything is kept on the server, so that if I shut down the workstation, and I go up to the server, I can work away there, without any difference of environments between the 2 boxes.
Another problem is, I'm a bit strapped for cash, so I don't want to buy a server & networking equipment until I know what I want to do is possible.
Thanks!
Kopf
Not all that hard to do; in fact, the terms that you've used - workstation and server - point to a solution.
In the Windows world, for example, those terms have come to mean "basic desktop vs. big, powerful machine." With Linux, the meanings come back to their original sense: specifically, a server is a program that provides a service (and in terms of hardware, the machine that runs that program, usually one that is set up for only - or mainly - that purpose.)
In this case, one of a number of possible solutions that spring to mind is NFS - or better yet, Coda (http://www.coda.cs.cmu.edu). Either one of these will let you mount a remote filesystem locally; Coda, at least in theory (I've read the docs, haven't had any practice with it) will allow disconnected operation and continuous operation even during partial network failure, as well as bandwidth adaptation (vs. NFS, which is terrible over slow links.) Coda also uses encryption and authentication, while NFS security is, shall we say, problematic at best.
Here is how it works in practice, at least for NFS: you run an NFS server on the machine that you want to export from - the one you referred to as the "server". I seem to remember that most distributions come with an NFS module already available, so kernel recompilation will probably not be necessary. Read the "NFS-HOWTO": it literally takes you step-by-step through the entire process, including in-depth troubleshooting tips. Once you've set everything up, export the "/home/kopf" directory (i.e., your home directory) and mount it under "/home/kopf" on your client machine. If you have the exported directory listed in your "/etc/fstab" and append "auto" to the options, you won't even have to do anything different to accommodate the setup: you simply turn the machine on, and write your documents, etc. Your home directory will "travel" with you wherever you go.
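A rough sketch of the two files involved - the hostnames are made up, and note that the host field in the exports line is also what restricts who may mount:

# on the server, in /etc/exports:
/home/kopf      workstation(rw)

# on the workstation, in /etc/fstab:
server:/home/kopf   /home/kopf   nfs   rw,auto   0   0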
Since you mention being strapped for cash, there's always another option: put together a minimal machine (say, a 486 or a low-end Pentium) that does nothing more than boot Linux. Telnet to your "big" machine, work there - run a remote X session, if you like. Other advantages of this setup include the need for only one modem (on your X/file server), the necessity of securing only a single machine, and, of course, the total cost. I would suggest spending a little of the money you save on memory and a decent video card, though - not that X is that resource-intensive, but snappy performance is nice to have. 32-64MB should be plenty.
I also suggest reading the "Thinclient-HOWTO", which explains how to do the NFS "complete system export" and the X-client/server setup.
Ben Okopnik
Hi! Thanks for all the great info!
What you've said has really enlightened me - I had never thought of remote mounting and stuff like that. Just one question: if I were to mount "/" on the server as "/" on the workstation, how much diskspace would I need on the workstation to start up Linux until it mounts all the drives? Or would I use a bootdisk to do this, and have absolutely no partition for Linux on the workstation?
You could indeed boot from a floppy, but it's a longish process, and floppies are rather unreliable; I would think that scrounging around can get you a small HD for just a few dollars. One of the things I really appreciate about Debian is that you can do a "base setup" - a complete working Linux system with networking and tons of system tools - in about 10 minutes, on about 20MB worth of drive space. I seem to remember that Slackware does much the same thing.
As to how much diskspace: you really don't need any. You could even set your machine up as a terminal (a bit more of a hassle, but it eliminates the need for even a floppy.) An HD is nice to have - as I've said, booting from one is much more convenient - but start with the assumption that it's a luxury, not a necessity. From there, everything you do is just fun add-ons.
The point to this is that there are almost infinite possibilities with Linux; given the tremendous flexibility and quality of its software, the answer to networking questions that start with "Can I..." is almost always going to be "Yes."
Also - I know the risks associated with allowing "(everyone)" to mount "/" or even "/home/" on Linux... Would I be able to restrict this to certain users, or even certain computers on the network?
Thanks for all the help!
Kopf
"Would I be able to..." qualifies; the answer is "Yes". The "NFS-Howto" addresses those, and many other security issues in detail.
Ben,
by the way, you talked about putting in about 32mb of Video memory into one of the computers to enhance X performance.. Which computer would I put it in, the X Server or Client?
Thanks!
Perhaps I didn't make myself clear; I believe I'd mentioned using a decent video card and 32MB of system memory. In any case, that's what I was recommending. Not that X is that hungry, but graphics are always more intensive than console use - and given the cost/performance gain of adding memory, I would have a minimum of 32MB in both machines. As to the video card, you'd have to go far, far down-market to get something that was less than decent these days. A quick look at CNet has Diamond Stealth cards for US$67 and Nvidia Riva TNT2 AGPs for US$89, and these cards are up in the "excellent" range - a buck buys a lot of video bang these days!
Ok, well, you've answered all questions I had!
Now 'tis time to make it all work.
Thanks again!
Kopf
Answer by Robert A. Uhl
I've some brief information on DSL for Linux.
Several phone companies do not officially support Linux since they do not have software to support our favoured platform. Fortunately I have found that it is still possible to configure a DSL bridge and have had some success therewith.
Let me note ahead of time that my bridge is a Cisco 675. Others may vary and may, indeed, not work.
The programme which you will use in place of the Windows HyperTerm or the Mac OS ZTerm (an excellent programme, BTW; I used it extensively back in the day) is screen, a wonderful bit of software which was included with my distribution.
To configure the bridge, connect the maintenance cable to the serial port. First you must su to root, or in some other way be able to access the appropriate serial port (usually /dev/ttyS0 or /dev/ttyS1). Then use the command
screen /dev/ttySx
to start screen. It will connect and you will see a prompt of some sort. You may now perform all the tasks your ISP or telco request, just as you would from HyperTerm or ZTerm.
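If the bridge's console wants a specific speed, screen will take it as an extra argument - for example (9600 here is only a guess; check your bridge's documentation):

screen /dev/ttyS0 9600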
One quits screen simply by typing control-a, then \. Control-a ? is used to get help.
Hope this is some use to some of the other poor saps experiencing DSL problems.
-- Robert Uhl
If I have pinged farther than others, it is because I routed upon the T3s of giants. --Greg Adams
... so mike asked ...
Hmm, I have a Cisco something-or-other and it's been doing DSL for Linux for almost two years. The external modems are fine, because there's nothing OS-specific about them, you just configure them in a generic manner.
It's the configuration that can be trouble. When I've called the telco, they've wanted to start a session to get various settings. 'Pon being informed that I'm using Linux, it has generally been `Terribly sorry, sir, but we don't support that.'
There are two ways to configure it: via the special serial cable that came with it, or via the regular Ethernet cable using telnet. I tried telnet first but I couldn't figure out the device's IP number (it's different for different models and that information was hard to get ahold of). So I plugged in the serial cable and used minicom as if it were an ordinary null-modem cable. That worked fine.
I had a deal of difficulty with minicom. Screen seems to be doing a right fine job, at least for the moment. Figured I'd let others know.
Enjoy your magazine.
-- Robert Uhl
[Mike] I said that I had used minicom on my Cisco 675 bridge at home.
Guess what. I had to configure a router at work last week. On DSL with a Cisco 675 bridge. Minicom didn't work. Screen did. And I never would have thought of using screen if it hadn't been for this TAG thread.
I pulled out the serial cable inside the box and reseated it before using screen, just in case it was loose, so perhaps it wasn't minicom's fault. But at least now I have more alternatives to try.
-- Mike Orr
Answer from Roy
Want to set a sticky note reminder on your screen? Create the tcl/tk script "memo"
#!/usr/bin/wish
button .b -textvariable argv -command exit
pack .b
and call it with
sh -c 'memo remember opera tickets for dinner date &'
Want to make a larger investment in script typing? Then make "memo" look like this:
#!/usr/bin/wish
if {[lindex $argv 0] == "-"} {
    set argv [lrange $argv 1 end]
    exec echo [exec date "+%x %H:%M"] $argv >>$env(HOME)/.memo
}
button .b -textvariable argv -command exit
.b config -fg black -bg yellow -wraplength 6i -justify left
.b config -activebackground yellow
.b config -activeforeground black
pack .b
and the memo will appear black on yellow. Also, if the first argument to memo is a dash, the memo will be logged in the .memo file. The simplicity of the script precludes funny characters in the message, as the shell will want to act on them.
In either case, left-click the button and the memo disappears.
Precede it with a DISPLAY variable,
DISPLAY=someterm:0 sh -c 'memo your time card is due &'
and the note will pop up on another display.
[This article contains 937 KB of inline images. Click here to begin reading. Use your browser's Back button to return. -Ed.]
Courtesy Linux Today, where you can read all the latest Help Dex cartoons.
I read your 2C tip regarding finding the rpm package a certain file belongs to in the latest edition of linuxgazette.com http://www.linuxgazette.com/issue58/lg_tips58.html
I have no idea if this is what you mean instead of your script but here it goes:
# rpm -qf /usr/bin/afm2tfm
tetex-afm-1.0.7-7
So the file /usr/bin/afm2tfm belongs to tetex-afm-1.0.7-7.
Is this what you meant?
Regards,
Richard Torkar
Hi, I recently downloaded smalllinux (kernel 2.2.0) and have had slight trouble getting tiny X running. It tries to load onto /dev/tty5, and smalllinux only has four ttys for VTs.
How do you use mknod? I can't make out what major and minor numbers are, and they are required to make the device.
Anyway, hope you can help me...
Thanx
Vlaad
The major number should be the same, and the minor number should increase by one for each. You should be able to see the pattern if you do
ls -al /dev/tty[0-9]
Under bash, the usual linux shell, those brackets indicate that it could be any character 0 through 9 there.
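The output should look something like this (owner, permissions and dates will vary):

crw--w--w-   1 root     tty        4,   3 Nov 12 11:31 /dev/tty3
crw--w--w-   1 root     tty        4,   4 Nov 12 11:31 /dev/tty4

The leading "c" means character device, the 4 is the major number, and the minor matches the console number.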
Well, I found out what to do by probing the LDP, but thanx for reading these emails.
I believe it was..
# mknod -m 666 /dev/tty5 c 4 5
I think this is what I need for an X server, anyway!
Well, let me know if I'm on the right track!
Thanx
VlaAd
Yep, you're right on target! -- Heather
Has anyone out there had any luck setting up Lexmark Printers on any Linux distribution?
We've been using networked Lexmarks for years at LJ.
Key (at least for the older ones we use) is configuring as a network printer using port 9100, with something like:
:lp=lex2.ssc.com%9100
for the lp line in your printcap entry.
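A fuller entry might look something like this - LPRng-style printcap, where "lp=host%port" opens a TCP connection to that port; the spool directory and aliases are only examples:

lp|lex|Lexmark on the network:\
        :lp=lex2.ssc.com%9100:\
        :sd=/var/spool/lpd/lex:\
        :mx#0:sh: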
We've a newer one on order, will post if the key is something very different.
--- Dan
In an answer to Joseph Annino you forgot to mention two details. One - you will need to run some getty program on a serial console you want to log in on, so an entry in /etc/inittab will be needed. As modem controls are not used, 'mingetty' should work fine, but most anything ('agetty', 'getty_ps', 'mgetty', ...) will also do.
Also, Joseph wants to use that console for administration; hence, presumably, he wants to log in there as 'root'. If this is indeed the case then an entry for the console port in /etc/securetty will also be needed, or logging in will run into some problems.
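An inittab entry for that might look something like the following - port, speed and terminal type are only examples - and then 'ttyS0' goes into /etc/securetty if root logins are wanted:

# /etc/inittab: spawn a getty on the first serial port
S0:2345:respawn:/sbin/agetty -L ttyS0 9600 vt100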
I also have a comment on Richard N. Turner's entry about cron jobs. I would be much more careful with sourcing things like /home/mydir/.bash_profile there. Cron jobs run unwatched and possibly with root privileges. Unless you can guarantee that something nasty will not show up in a sourced file now or anytime in the future, you can be in for a rude surprise. Setting a precisely controlled environment in a script meant to be run from cron is a much more appealing option. Depending on the whole computing setup such arrangements with sourcing can be OK, although I prefer to err here on the side of caution, but readers should at least be aware of a potentially big security hole.
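A minimal sketch of such a precisely controlled environment, with a hypothetical job at the end:

#!/bin/sh
# run from cron: build a small, explicit environment instead of
# sourcing anyone's dotfiles
PATH=/bin:/usr/bin
umask 077
export PATH
exec /usr/local/sbin/nightly-report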
Regards,
Michal
On Mon, Nov 06, 2000 at 08:50:40AM +0000, Clive de Salis wrote:

Dear All
I've converted my office in Birmingham in the UK to run entirely on Linux using Slackware and have successfully run the business for nearly 3 years now without my customers realising that I don't run Windows or use Microsoft Office.... Which just shows that it can be done.
I'm getting ready to convert the Monmouth office to the same using the Mandrake distribution. There is, however, one software application that I can't find for Linux ... and that is the equivalent of Microsoft Project. Do you know of a Gantt-chart-based project planning tool for Linux?
Good to hear yet another "Linux in business" success story! Project management software for Linux is not a huge field, although there seem to be at least several groups - some with rather serious money behind them - working on remedying the lack. There are several pieces of software already in existence that use Gantt charts; check out
http://linas.org/linux/pm.html
for a good start on software in the Call Center, Bug Tracking and Project Management categories.
Good luck in your endeavours,
Ben
Joe Kaplenk is dedicated to teaching about UNIX-like operating systems. He is the author of several system administration books, including UNIX System Administrator's Interactive Workbook and Linux Network Administrator's Interactive Workbook.
OLinux: Tell us about your career, personal life (age, birthplace, hobbies, education...)
Joe Kaplenk: I was born in Middletown, NY. I'm 53. My current hobbies include reading and watching history. I'm particularly fascinated with population migrations and the development of various nations. World War II and the rise of the various political movements fascinate me. The other hobbies include computers, of course, teaching and reading on technical business trends.
My college background includes going to Rensselaer Polytechnic Institute in Troy, NY, where I majored in Math with a minor in Physics. From there I went to the University of Utah and graduated in Physics. My undergraduate interests included quantum mechanics. It was something I would only study for about an hour a week and did very well in. I could recite much of the history of quantum mechanics while I was in high school.
My favorite instructor was Robert Resnick at RPI. He wrote the premier text series for undergraduate Physics. I was very fortunate to get into his class since there was a long waiting list. He made Physics very real and more exciting for me. Isaac Asimov was my favorite author. Both of them influenced me to go into writing.
For graduate work I studied courses without a major in Chemistry, Biology, Biochemistry and Journalism, and worked part-time as a science reporter for the Daily Utah Chronicle, the campus paper. I hoped to go to graduate school in Biochemistry and Biophysics and had been accepted at several colleges after this. One of my fascinations was studying the effects of radiation on genetics. I believed that it would be possible to find a way to selectively modify genes with radiation, given the right parameters, and was hoping to pursue this line of research. Several of my advisors warned me against it, saying it would never work, but I felt strongly this was worth pursuing.
However, after much thought I left graduate school at the University of South Carolina my first week. At this point I decided that it would be too much effort and the money wasn't there to support me. In the early seventies I spent several years in the southern United States helping in rural black communities. My religious beliefs as a Baha'i strongly influenced me in this.
My wife Ramona has been a really good support network for me. She's the love of my life. I have a daughter, Anisa, from a previous marriage. She has been an outstanding student and has received a number of commendations.
OLinux: What company do you work for, and what is your job nowadays?
Joe Kaplenk: I am currently working for Collective Technologies as a consultant. Some of my assignments have been working with Red Hat Linux, but most of them have been with Solaris. Previous to this, and until last March, I worked with IBM Global Services and did some Linux work there as well as supporting Solaris and AIX. In this position I was on the international team that did the IBM Redbooks on Linux. I looked within IBM for opportunities to do more Linux, but did not find anything that was satisfactory at that time.
Some of my spare time is on teaching system admin, researching ways to teach, and developing new methods of teaching. Other time is spent playing with various software, doing installations and testing. The rest of the time is spent on family things.
OLinux: When did you start working with Linux? What was your initial motivation, and how do you see it nowadays?
Joe Kaplenk: My first exposure to Linux was around 1992. I was working as the main UNIX system administrator at Loyola University Chicago. There were several students that worked for me. We were all keeping up closely with the USENET, the internet news groups. They found something about Linux online. We had been playing with Minix, which was actually used in one of the Math classes. This was prior to release 1.0. The students were very excited when Linux 1.0 was released. This meant to them that it could now be more stable. It wasn't long after that that Yggdrasil Linux was released. We downloaded the code, did some installs and played with it.
I thought this was great since this gave the students an opportunity to play with a UNIX like operating system as root without causing havoc on production servers. We were running AT&T 3B2s at that time. These were the standard boxes for UNIX development then, so much of what they did on Linux could be done on UNIX also.
I see Linux as being a major player in the operating system arena in a very short time. Linux will not kill all the other versions of UNIX. But I do see a reduction in the versions. With the GNOME foundation being developed and settling on a common desktop for several versions of UNIX it will make Linux even more widely used. However, there are some things that proprietary operating systems can do better. They can be more focused on new apps, throw money at it, and bring together talent quickly to solve a problem. The Linux community is largely dependent on finding developers to do the projects that often do it for free or for the love of the project. But quick development and focus are not necessary attributes of this model. So both models will continue to be used.
OLinux: What role do you play in the Open Source world these days?
Joe Kaplenk: One of my major efforts at the moment is bringing Linux into the training and academic side of system administration. Recently I attended and did a presentation at Tech Ed Chicago 2000. The presentation covered what I consider the major areas of difficulty in teaching system admin. I hope to have it on my website shortly at http://users.aol.com/jkaplenk. I did it in Star Office and want to make it available in other formats also.
This conference attracted educators and trainers from universities, colleges, companies and institutions in Illinois. At the conference it was strongly emphasized that there is an increasing shortage of system administrators. The need to develop training programs needs to be given a high priority.
The role I see myself playing is in helping to develop programs for training system admins. Because Linux can run in more places than any other operating system, it is a natural solution to the problem. Students can learn and develop skills that they might not otherwise have. The materials I developed over the years grew into my first two books, the UNIX System Administrator's Interactive Workbook and the Linux Network Administrator's Interactive Workbook. They also formed the start of the whole Prentice-Hall series of Interactive Workbooks.
OLinux: As an educator, what do you think about the proliferation of Linux certification services? Besides your books, how else can you extend your Linux/UNIX knowledge to users?
Joe Kaplenk: Some employers are demanding Linux certification. My last assignment was one that required me to have my Red Hat Certified Engineer (RHCE), which I have. Personally, I think certification is overemphasized and the important thing is what the admin has done and can do. The RHCE comes the closest to being a true test because it has three parts. The first is multiple choice, the second is debugging and the third is installation. The other certifications that I am aware of do not have this. They are only multiple choice type questions. As an instructor that uses multiple choice questions, I am very familiar with the failings and I try to balance this with hands-on work.
I took the Sair Linux certification test right after passing the RHCE test. I passed 3 of the 4 sections, but took the networking section twice. I failed the first time, so I answered any suspect questions differently the second time. It made no difference in the final result. I teach networking, have been doing it for 16 years and have written books on it. The pre-test material says that you only need to have several years' experience. This indicates to me that there is some failing that will need to be looked at. While someone can pass it - and I'm sure someone has - they may have passed it not because of knowledge but because of choosing the answers that were being looked for. But I know that sometimes the only way to find out whether a test is good is to give it, so I'm sure with time the bugs will be ironed out. The best test is real-life experience.
As I solve problems or during installs I have started writing up docs that explain the process. My focus is usually on the process itself. The outcome is important, but I figure that if I can speed up, clarify, or make easier the steps I have been a success. In one job I decreased the process time from two months to two weeks by analyzing and automating as much as possible. Eventually I'll have my own set of docs that people can refer to for these processes.
OLinux: How good are the Linux support services? Can you point out any failings in these services?
Joe Kaplenk: I don't have a lot of experience with Linux support services other than providing them. Currently there are a lot of opportunities to do Linux support, and this will grow rapidly because of the growth of Linux. Someday the CIOs are going to wake up and see that they have production Linux boxes and their support guy just left. They will need to find someone to help them out.
The only failures might be in the lack of planning and training for what is becoming a tidal wave of demand for Linux. I have been a user of Solaris and AIX services and my observation is that Linux will be at those levels soon if it isn't already.
OLinux: What are the better and worse features of the Linux platform in comparison with the Windows platform?
Joe Kaplenk: My jobs have required me to work with DOS, Windows 3.1/NT/95/98, AIX, Solaris, HP-UX, AT&T UNIX and BSD. As a result I have come in contact with many of the features, good and bad, of these operating systems.
Linux is very scalable. Ignoring hardware memory requirements, Linux can be put on wristwatches or IBM mainframes and run the same program.
The Linux source code is accessible so that a developer can figure out how to talk to the operating system. All the system calls are documented. This is not found in Windows where many system calls are hidden and only Microsoft knows about them. This gives MS a competitive edge. A Linux developer can know exactly what to expect whereas oftentimes Windows developers are shooting blind and hope they hit the target with enough ammo.
Windows does have some good points. It is widely used. There are many applications that only run on Windows, so the user is forced to use Windows. However, the open source community is coming along very quickly and providing equivalent functionality in Linux programs. Microsoft spends a lot of time and money testing applications on users to determine the best way to make something available to the user, and they have simplified the process greatly; I find some Linux apps confusing by comparison. As long as you do things the Microsoft way and buy Microsoft products you won't have a problem.
But the problem is that there are many software manufacturers that write for Windows and really don't seem to have a clue. I've installed McAfee Office 2000, Norton Utilities and various Norton antivirus products over the years and inevitably remove them. After the installs my boxes will slow to a crawl, crash more often, lose icons and suffer various other insanities. I figure that for about 5 years I could count on spending 12 weeks a year trying to fix my MS boxes, and ultimately I would have to reinstall the whole mess. My final solution is to never install anything that gets too close to the operating system, like these utilities. Then the boxes run a lot better. But I lose the functionality of the software. Basically, if I leave it alone once it is running then it works great. But this loses a lot of the fun.
With Linux, and UNIX in general, the operating system and the apps are practically always separate. So when you upgrade to another version of the various system monitoring tools the system runs without a problem. If there is a problem the developer, whose email address is available, can fix it very quickly.
Microsoft is in a difficult position. They are trying to control the process while giving a certain amount of flexibility to other companies. They realize that other developers create ideas more quickly than MS. So if they let others develop the ideas, then Microsoft can buy these companies out, steal the ideas or put them out of business. This model won't last long. With MS pushing the UCITA laws that passed in Virginia, which prevent reverse engineering, they will have closed one of the doors they themselves use. I'm reminded of TI, which had the best 16-bit microprocessor in the late 80s. I think it was the 9600. But TI decided they could control the process and tried to design 96% of the software. Eventually people went elsewhere and the processor did not achieve its goals.
OLinux: What does the involvement of big companies like IBM mean for Linux? Is it really good for the Linux community?
Joe Kaplenk: The Linux community is tending to go in two directions. There is the Free Software Foundation or the GNU/Linux group that is devoted to the purity of the GNU GPL license. These people are very fanatical about keeping Linux in the direction that it started in. This is represented commercially by the Debian GNU/Linux distribution.
However, the other direction is that many companies such as IBM are getting involved. They are finding that they can make a lot of money on Linux services. Let's remember that Bill Gates got his start because IBM didn't want to develop an operating system for the PC; they figured the money was in the hardware. This same mentality is still there: the operating system can sell the hardware. If IBM can sell more boxes by using Linux, then they will. IBM is adding their apps to run on Linux. They are pushing Linux because they know the market is going to Linux, and they can sell their apps and services on Linux and make money that way. In IBM's world, Linux is one more product to support and make money from.
I don't see IBM creating their own distribution unless it is for some specialized application such as point-of-sale (POS) equipment used in stores, or for ATM machines. These have special requirements, and even in this case they would probably contract with someone else.
There are several manufacturers putting their own front ends on Linux or developing their own version of Linux. But if the libraries and kernel can continue to be compatible then I think Linux will be okay. There may be forks, but the good ideas will be brought back in.
I do see the GNU/Linux folks getting frustrated at some of the directions, and I would expect that this will give more impetus to the HURD kernel development. This is the GNU operating system that Richard M. Stallman was working on before Linux got fired up. If the Linux community doesn't have a place for them, then they may move on to their own kernel and distribution separate from the other Linux distributions. Fortunately the FSF has felt very strongly about their apps being able to run on as many operating systems as possible, so this shouldn't be too painful for the Linux community.
OLinux: In your opinion, what improvements and support are needed to make Linux a worldwide platform for end users?
Joe Kaplenk: Usability is constantly emphasized in the Linux/business community, and I agree with this. When I can sit my mother-in-law down at the computer and she can use Linux as easily as Windows, then we'll be there. When she realizes that the box doesn't have to be rebooted for silly things the way Windows does, it will be a solid sale. Most users don't care about the operating system; they just want to use it. Windows has a lot of ease of use and wide usability built in. Linux is getting close. I try to use Linux whenever I can and am moving things over. I have two Windows boxes and a laptop running Windows. My Windows needs have decreased, and except for archived stuff, I don't use the two Windows boxes. My laptop runs Windows only because I use AOL for my dialup on the road and for some other apps.
OLinux: What was your last book release? Is there any new publication underway?
Joe Kaplenk: My last solo book was the Linux Network Administrator's Interactive Workbook. My last team effort was the IBM Redbook series on Linux, which was recently published by Prentice-Hall. This is a four-book series.
There are no publications currently underway. I have been gathering my thoughts and hope to publish a UNIX system administration book based on my research. I plan to merge my first two books and incorporate several unique concepts that I feel can make teaching and learning system admin much easier. I have a contract offer from Prentice-Hall that I am evaluating. Once I sign the contract, the writing will take up most of my spare time.
Joe Kaplenk: Three years ago my goal was a book a year. In two years I had two books published solo and four books as part of a team. I'm basically on track or ahead of schedule.
OLinux: What was your most successful book? How many copies were sold? Has it been translated into other languages?
Joe Kaplenk: I don't have numbers on the Redbooks, but the UNIX System Administrator's Interactive Workbook was the best seller for the solo books. It has sold at least 20,000 copies. But the numbers are usually up to nine months behind. The networking book was intentionally limited in content to allow the user to just build a network and so didn't sell as well.
There are no translations into other languages as far as I know.
OLinux: How do you evaluate the sharp fall of stocks such as VA Linux yesterday? Is it possible to make money as a Linux company? How do you address this problem?
Joe Kaplenk: It was inevitable, because new tech stocks in general have been the darlings of the stock market, and Linux fits this role perfectly. I also suspect that something was going on that was unanticipated. As I interpret the situation, people were placing after-hours bids for VA Linux stock before it sold. When investors and brokers saw the prices that people were willing to pay, I suspect they made the opening price ridiculously high. As a result many people made quick fortunes. Since the stocks were way overpriced, they quickly dropped.
I think the investors in the stock market IPOs have learned their lesson. The IPOs will not be the rockets they once were. Though there are occasional blips.
The biggest money to be made in Linux is in services and training. We will very quickly see this happening. Hardware does not make as much money and neither does the software. Though advanced software such as backup software does sell as well on Linux as on other platforms.
OLinux: What kind of relationship do you have with the Linux community? Do you currently work for any Linux organizations?
Joe Kaplenk: I don't have any formal relations with the Linux community other than being a member of several of the local Linux groups. I am also a member of Uniforum. My time has been so filled with my writing, research, teaching and working that I have avoided additional time commitments. I also get over 100 emails a day that I have to deal with.
I don't work for any Linux orgs, but I do occasionally get assignments that originate from Red Hat.
OLinux: Please leave a message for our users.
Joe Kaplenk: Linux is going mainstream. This is an irreversible process. If you want to succeed career-wise and financially, you need to understand the obstacles and have broad experience with several operating systems. You also need to get down and dirty and play in the sandbox. This means tearing apart the boxes and the software and becoming involved (or should I say intimate?) with them.
Just like the early revolution with PCs and DOS this will move by very quickly. Ten years down the road it might be something else. It won't be MS and Windows and maybe not Linux. So take advantage of it while you can. Keep yourself open to new ideas so that you can again be there when it comes around.
My email is jkaplenk@aol.com and I am always open to other ideas. Educators that are working on the same issues in training system admins as I am are especially encouraged to contact me.
It has been a very busy year for the Linux Professional Institute. At shows and convention halls around the world, people are interested in Linux*, and people want to know more about where Linux is headed and the demands that will be placed upon the administrators of these systems. The spectrum of interested Linux participants runs the full scale, from the outright beginner who doesn't have a clue what all the buzz over this new operating system is about, to the down-and-dirty, hands-on, faster-than-a-speeding-bullet kind of person; everyone wants to be a player in the Linux game.
The audiences for the Linux Professional Institute have been enthusiastic and engaged. It seems everyone has a different opinion of what is truly needed within the Linux community. But one thing most seem to agree on is that as Linux grows in demand, more professionals will be needed, and a standard of competence is the best approach to ensuring qualified professionals.
Linux no longer appears to be a fringe operating system. The 47,000 square feet of representation at COMDEX/Linux Business Expo in Las Vegas, and the participation of the attendees from COMDEX, were a sight to behold. Tens of thousands of people toured the Linux-related companies, and I am sure that the result will show up in sales. As sales continue to increase for these companies, a demand will be generated for Linux professionals. For the Linux professional, having the right knowledge, the right tools, and the right qualifications will be everything.
And the Linux Professional Institute (LPI) has helped bridge the wide gap between the many different players within the Linux professional community and the ever-growing ranks of "newbies". The Linux Professional Institute (www.lpi.org) has been spreading the word about Linux and certification of Linux professionals for the last two years. Their web site and related information can now be read in French, German, Greek, Japanese, Polish, Spanish, and English; Chinese is soon to be added, with more to come. All of this has been done by volunteers from around the world. LPI continues to be organized and run with the help of thousands of people from every part of the globe.
Japan just recently went online with Virtual University Enterprises (VUE) for the Linux Professional Institute's tests for certification of Linux administration. This was a milestone within the Linux community. The Japanese are embracing Linux in a big way, and they believe in the certification process. LPI views the interest in Linux in Japan as one more piece of evidence that Linux is here to stay. There is currently an LPI-China being started by a group of professionals and educators in China. Sponsors have been contacted, and enthusiasm runs high for this group as they ready themselves to embrace their Linux audiences.
Enthusiasm and a sense of community have taken hold at the Linux Professional Institute. A lot of work remains to be done, and anyone interested in participating or volunteering their expertise or time should contact wilma@lpi.org, dan@lpi.org or info@lpi.org. The organization continues to work on the next set of exams to be administered. Level two testing is scheduled to begin in the first quarter of 2001. Stay tuned to the web site for further details.
The Linux Professional Institute invites all Linux enthusiasts and professionals to participate in their certification exams and become known as an LPI-1, LPI-2 or LPI-3 Professional. This certification is currently recognized by IBM, SGI, HP, SuSE, and TurboLinux, among others. Since the first exam was taken just a few months ago in June, exams have been taken in almost every part of the globe. The top five countries (most tests taken, in descending order) were the U.S., Germany, Canada, the Netherlands, and Japan. But there were also participants from Taiwan, Switzerland, Pakistan, Ethiopia, Brazil, Bulgaria, China, Ecuador and many more.
As has been the case for the last two years, 2001 appears set to be even busier for LPI. January brings the Linux Professional Institute to Amsterdam and New York, then to Paris in February. The rest of the year will continue at this pace with appearances around the globe. To follow our movements, log on to http://www.lpi.org/i-events.html. To help us at shows and conventions, contact info@lpi.org or wilma@lpi.org.
Come join the Linux Professional Institute and all their sponsors in challenging yourself as a Linux enthusiast. We invite you to participate in our testing procedure, which leads to certification of professional Linux administrators. Be part of a worldwide organization where you can make a difference. Join our many discussions through mailing lists, or help staff booths in different parts of the world. The Linux Professional Institute invites anyone interested in helping the organization through its next year of progress to log on to www.lpi.org and click on "Getting Involved". We're looking forward to another great year, and we hope you will be with us for the ride.
*Linux is a trademark of Linus Torvalds; Linux Professional Institute is a trademark of Linux Professional Institute Inc.
[Eric also draws the Sun Puppy comic strip at http://www.sunpuppy.com. -Ed.]
Overview
GnuPG is a tool for secure communication and data storage. It can be used to encrypt data and to create digital signatures. GnuPG is a complete and free replacement for PGP. Because it does not use the patented IDEA algorithm, it can be used without any restrictions. GnuPG uses public-key cryptography so that users may communicate securely. In a public-key system, each user has a pair of keys consisting of a private key and a public key. A user's private key is kept secret; it need never be revealed. The public key may be given to anyone with whom the user wants to communicate.
Features
You can find all the software related to GnuPG at http://www.gnupg.org/download.html
Installation
Copy the gnupg source file to the /usr/local/ directory (or wherever you want to install it) and then cd to that directory.
[root@dragon local]# tar xvzf gnupg-1.0.4.tar.gz
[root@dragon local]# cd gnupg-1.0.4
[root@dragon gnupg-1.0.4]# ./configure
[root@dragon gnupg-1.0.4]# make
This will compile all source files into executable binaries.
[root@dragon gnupg-1.0.4]# make check
It will run any self-tests that come with the package.
[root@dragon gnupg-1.0.4]# make install
It will install the binaries and any supporting files into appropriate
locations.
[root@dragon gnupg-1.0.4]# strip /usr/bin/gpg
The "strip" command removes debugging symbols, reducing the size of the
"gpg" binary.
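As a quick sanity check that the install worked, you can ask the new binary to report its version (the exact output will vary with your build; this step is a suggestion, not part of the original procedure):
[root@dragon gnupg-1.0.4]# gpg --version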
Common Commands
1: Generating a new keypair
The first time we use GnuPG, we must create a new keypair (public and private). The command-line option --gen-key is used to create a new primary keypair.
Step 1
[root@dragon /]# gpg --gen-key
gpg (GnuPG) 1.0.2; Copyright (C) 2000 Free Software Foundation, Inc.
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions. See the file COPYING for details.
gpg: /root/.gnupg: directory created
gpg: /root/.gnupg/options: new options file created
gpg: you have to start GnuPG again, so it can read the new options
file
Step 2
Start GnuPG again with the following command:
[root@dragon /]# gpg --gen-key
gpg (GnuPG) 1.0.2; Copyright (C) 2000 Free Software Foundation, Inc.
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions. See the file COPYING for details.
gpg: /root/.gnupg/secring.gpg: keyring created
gpg: /root/.gnupg/pubring.gpg: keyring created
Please select what kind of key you want:
(1) DSA and ElGamal (default)
(2) DSA (sign only)
(4) ElGamal (sign and encrypt)
Your selection? 1
DSA keypair will have 1024 bits.
About to generate a new ELG-E keypair.
minimum keysize is 768 bits
default keysize is 1024 bits
highest suggested keysize is 2048 bits
What keysize do you want? (1024) 2048
Do you really need such a large keysize? y
Requested keysize is 2048 bits
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n> w = key expires in n weeks
<n> m = key expires in n months
<n> y = key expires in n years
Key is valid for? (0) 0
Key does not expire at all
Is this correct (y/n)? y
You need a User-ID to identify your key; the software constructs the
user id
from Real Name, Comment and Email Address in this form:
"
Real name: Kapil sharma
Email address: kapil@linux4biz.net
Comment: Unix/Linux consultant
You selected this USER-ID:
"Kapil Sharma (Unix/Linux consultant) <kapil@linux4biz.net> "
Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? o
You need a Passphrase to protect your secret key.
Enter passphrase: [enter a passphrase]
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
++++++++++.+++++^^^
public and secret key created and signed.
Now I will explain the various inputs asked for during generation of the keypair.
There are advantages and disadvantages to choosing a longer key. The advantage: the longer the key, the more secure it is against brute-force attacks. The disadvantages: encryption and decryption become slower as the key size is increased, and a larger keysize may affect signature length. The default keysize is adequate for almost all purposes, and the keysize can never be changed after selection.
Real name: Enter your name here
Email address: Enter your email address
Comment: Enter any comment here
There is no limit on the length of a passphrase, and it should be carefully
chosen. From the perspective of security, the passphrase to unlock the
private key is one of the weakest points in GnuPG
(and other public-key encryption systems as well) since it is the only
protection you have if another individual gets your private key. Ideally,
the passphrase should not use words from a
dictionary and should mix the case of alphabetic characters as well
as use non-alphabetic characters. A good passphrase is crucial to the secure
use of GnuPG.
2: Generating a revocation certificate
After your keypair is created you should immediately generate a revocation
certificate for the primary public key using the option --gen-revoke. If
you forget your passphrase or if your private
key is compromised or lost, this revocation certificate may be published
to notify others that the public key should no longer be used.
[root@dragon /]# gpg --output revoke.asc --gen-revoke mykey
Here mykey must be a key specifier, either the key ID of your primary
keypair or any part of a user ID that identifies your keypair. The generated
certificate will be left in the file
revoke.asc. The certificate should not be stored where others can access
it since anybody can publish the revocation certificate and render the
corresponding public key
useless.
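If the day comes when you need it, the revocation certificate is applied by importing it into your keyring and then redistributing the now-revoked key. A minimal sketch (the keyserver name here is only an example, not from the original text):
[root@dragon /]# gpg --import revoke.asc
[root@dragon /]# gpg --keyserver certserver.pgp.com --send-keys mykey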
3: Listing Keys
To list the keys on your public keyring use the command-line option --list-keys.
[root@dragon /]# gpg --list-keys
/root/.gnupg/pubring.gpg
------------------------
pub 1024D/020C9884 2000-11-09 Kapil Sharma (Unix/Linux consultant)
<kapil@linux4biz.net>
sub 2048g/555286CA 2000-11-09
4: Exporting a public key
You can export your public key for use on your homepage, on a public key server on the Internet, or by any other method. To send your public key to a correspondent you must first export it. The command-line option --export is used to do this. It takes an additional argument identifying the public key to export.
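The export command itself is not shown in the text as we have it; a plausible invocation, assuming my own user ID and the --armor option (which produces the ASCII key block excerpted below), would be:
[root@dragon /]# gpg --armor --export kapil@linux4biz.net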
[...]
-----END PGP PUBLIC KEY BLOCK-----
5: Importing a public key
To add a correspondent's exported public key to your keyring, use the --import option:
[root@dragon /]# gpg --import <filename>
Here "filename" is the name of the exported public key.
For example:
[root@dragon /]# gpg --import mandrake.asc
gpg: key 9B4A4024: public key imported
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: Total number processed: 1
gpg:               imported: 1
In the above example we imported the public key file "mandrake.asc", downloadable from the Mandrake Internet site, into our keyring.
6: Validating the key
Once a key is imported it should be validated. A key is validated
by verifying the key's fingerprint and then signing the key to certify
it as a valid key. A key's fingerprint can be quickly viewed with the --fingerprint
command-line option.
[root@dragon /]# gpg --fingerprint <UID>
As an example:
[root@dragon /]# gpg --fingerprint mandrake
pub 1024D/9B4A4024 2000-01-06 MandrakeSoft (MandrakeSoft official
keys) <mandrake@mandrakesoft.com>
Key fingerprint = 63A2 8CBD A7A8 387E 1A53
2C1E 59E7 0DEE 9B4A 4024
sub 1024g/686FF394 2000-01-06
In the above example we verified the fingerprint of mandrake. A key's fingerprint is verified with the key's owner. This may be done in person or over the phone or through any other means as long as you can guarantee that you are communicating with the key's true owner. If the fingerprint you get is the same as the fingerprint the key's owner gets, then you can be sure that you have a correct copy of the key.
7: Key Signing
After importing and verifying the keys in your public keyring, you can start signing them. Signing a key certifies that you know the owner of the key. You should only sign a key when you are 100% sure of its authenticity.
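The command that produced the following transcript is not shown in the text as we have it; one way to get it, assuming the mandrake key imported earlier, is gpg's --sign-key option:
[root@dragon /]# gpg --sign-key mandrake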
pub 1024D/9B4A4024 created: 2000-01-06 expires: never
trust: -/q
Fingerprint: 63A2 8CBD A7A8 387E 1A53 2C1E 59E7 0DEE 9B4A 4024
MandrakeSoft (MandrakeSoft official keys) <mandrake@mandrakesoft.com>
Are you really sure that you want to sign this key
with your key: "Kapil Sharma (Unix/Linux consultant) <kapil@linux4biz.net> "
Really sign? y
You need a passphrase to unlock the secret key for
user: "Kapil Sharma (Unix/Linux consultant) <kapil@linux4biz.net> "
1024-bit DSA key, ID 020C9884, created 2000-11-09
Enter passphrase:
9: Encrypting and decrypting
The procedure for encrypting and decrypting documents is very simple.
If you want to encrypt a message to mandrake, you encrypt it using mandrake's public key, and then only mandrake can decrypt that file with his private key. If mandrake wants to send you a message, he encrypts it using your public key, and you decrypt it with your private key.
To encrypt and sign data for the user Mandrake that we have added on
our keyring use the following command (You must have a public key of the
recipient):
[root@dragon /]# gpg -sear <UID of the public key> <file>
As an example:
[root@dragon /]# gpg -sear Mandrake document.txt
You need a passphrase to unlock the secret key for
user: "Kapil Sharma (Unix/Linux consultant) <kapil@linux4biz.net> "
1024-bit DSA key, ID 020C9884, created 2000-11-09
Enter passphrase:
Here "s" is for signing , "e" for encrypting, "a" to create ASCII armored output (".asc" is ready for sending by mail), "r" to encrypt the user id name and <file> is the data you want to encrypt
[root@dragon /]# gpg -d <file>
As an example:
[root@dragon /]# gpg -d documentforkapil.asc
You need a passphrase to unlock the secret key for
user: "Kapil Sharma (Unix/Linux consultant) <kapil@linux4biz.net> "
1024-bit DSA key, ID 020C9884, created 2000-11-09
Enter passphrase:
Here the parameter "d" is for decrypting the data and <file> is a
data you want to decrypt.
[Note: you must have the public key of the sender of the message/data
that you want to decrypt in your public keyring database.]
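By default the decrypted data is written to standard output. If you would rather have it land in a file, the --output option can be combined with decryption (the filenames here are only illustrative):
[root@dragon /]# gpg --output document.txt --decrypt documentforkapil.asc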
10: Checking the signature
Once you have exported and distributed your public key, anybody can use the --verify option of GnuPG to check whether data from you was really signed by you.
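For example, if a correspondent receives a signed file from you, the check might look like this (the filename is illustrative; gpg reports whether the signature is good and which key made it):
[root@dragon /]# gpg --verify document.asc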
Some uses of GnuPG software
1: Send encrypted mail messages.
2: Encrypt files and documents.
3: Transmit encrypted files and important documents over a network.
Here is a list of some of the frontends and related software for GnuPG:
GPA aims to be the standard GnuPG graphical frontend. It has a very nice GUI interface.
GnomePGP is a GNOME desktop tool to control GnuPG.
Geheimniss is a KDE frontend for GnuPG.
pgp4pine is a Pine filter to handle PGP messages.
MagicPGP is yet another set of scripts to use GnuPG with Pine.
PinePGP is also a Pine filter for GnuPG.
More Information
http://www.gnupg.org/docs.html
Conclusion
Anybody who is cautious about security should use GnuPG. It is one of the best open-source programs for encrypting and decrypting your sensitive data, and it can be used without restriction since it is under the GNU General Public License. It can be used to send encrypted mail messages, files and documents, and to transmit files and important documents securely over a network.
I published an article in the September issue of Linux Gazette (LG #57) titled Making a Simple Linux Network Including Windows 9x. I received questions regarding my encrypted Windows partition; people asked me, "How did you do that?" So I'd like to answer that question here. I would also like to describe my successful configuration of sendmail, which remained an open issue in my previous article.
The above-mentioned article was about how to configure a simple network including Windows 9x, but at that time I was unsuccessful in configuring sendmail. First, let me say that I was not interested in having a standard mail server--one server from which I would fetch mail. I wanted to configure sendmail so mail could be sent from machine one to machine two, and from machine two to machine one. This is somewhat unusual; however, the information revealed here may also be useful for a standard sendmail server configuration.
I am using the term "sendmail configuration" to mean not "configuration of the sendmail.cf file", but rather "making sendmail work". In other Linux documentation, the term "sendmail configuration" is understood as manipulation of the sendmail configuration files in the /etc directory.
The following article will briefly describe how I configured this and how I successfully shared an encrypted Windows partition with Linux.
Normally, I use Linux at home, so I had not given my Linux workstation a network name - a host name. I found most of the programs people recommended in their answers (webadmin, configure sendmail) ineffective. This was obviously because of one fact I must strongly emphasize here: usually, sendmail is preconfigured, and no editing of its configuration file (sendmail.cf) is necessary unless you want to do something special, or at least something of your particular choice. What actually needs to be done is the following:
1. The first important thing was to give my Linux box a host name. I did this with the command "hostname one.juro.sk", where "one.juro.sk" may be replaced by a name for your machine. If you do not have a real network name, it does not matter; just replace my name with yours, e.g. one.frank.com. The article in the September issue clearly describes how to configure your network, so look there. The information in the present article will also apply to configuring sendmail in a plip network. You can open Linuxconf (Red Hat) and permanently change your hostname; under Basic sendmail configuration > "present your system as:" put one.juro.sk. You should also do this on computer TWO, where you will put two.juro.sk instead of one.juro.sk.
2. The file sendmail.cw in the /etc directory must contain a line with the following text: one.juro.sk on computer ONE, and two.juro.sk on computer TWO. The sendmail.cw file comes preconfigured as empty; it only contains the following commented text: # sendmail.cw - include all aliases for your machine here.
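For example, on computer ONE the finished sendmail.cw would contain just this (a sketch based on the description above):
# sendmail.cw - include all aliases for your machine here
one.juro.sk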
3. DNS must be configured. The DNS files are contained in the bind package. Just install bind and change its configuration files in the /etc directory. Here are my DNS configuration files:
/etc/named.boot:

;
; a caching only nameserver config
;
directory /etc/namedb
cache     .                      root.cache
primary   0.0.127.in-addr.arpa   named.local

The content of my /etc/named.conf file is different from the standard Linux configuration. I changed it because I use FreeBSD and I back up the /etc directory regularly. For me it is more convenient to have all configuration files in /etc rather than a few in /var and the rest in /etc, but this is a matter of your choice. The file root.cache contains the world root DNS servers and is preconfigured, so I do not include its content here. You will only make use of this file if you are connected to the net; however, if you are not connected, it's OK to leave it as it is. I noticed the file does not interfere with our configuration.
/etc/named.conf:

options {
        directory "/etc/namedb";
};
zone "." {
        type hint;
        file "root.cache";
};
zone "0.0.127.in-addr.arpa" {
        type master;
        file "named.local";
};
zone "juro.sk" {
        type master;
        file "juro.sk";
};
zone "0.0.10.IN-ADDR.ARPA" {
        type master;
        file "10.0.0";
};
/etc/namedb/named.local:

$TTL 3600
@       IN SOA one.juro.sk. root.one.juro.sk. (
                20000827 ; serial
                3600     ; refresh
                900      ; retry
                3600000  ; expire
                3600 )   ; minimum
        IN NS one.juro.sk.
1       IN PTR one.juro.sk.

The periods at the end are not a mistake; they are important here (keep "one.juro.sk."). You can find more information in the DNS-HOWTO. If you don't understand something, just forget it and feel fine with my assurance that this DNS configuration will work.
/etc/namedb/juro.sk:

$TTL 3600
@          IN SOA one.juro.sk. root.one.juro.sk. (
                2000080801 ; serial
                3600       ; refresh
                900        ; retry
                1209600    ; expire
                43200 )    ; default_ttl
           IN NS one.juro.sk.
           IN MX 0 one.juro.sk.
localhost. IN A 127.0.0.1
; info on particular computers
ns         IN A 10.0.0.1
one        IN A 10.0.0.1
www        CNAME one
ftp        CNAME one
two        IN A 10.0.0.2

MX is a mail exchanger, NS is a nameserver, and CNAME is a canonical name or alias. Now follows the reverse zone:
/etc/namedb/10.0.0:

$TTL 3600
@       IN SOA one.juro.sk. root.one.juro.sk. (
                1997022700 ; serial
                28800      ; refresh
                14400      ; retry
                3600000    ; expire
                86400 )    ; default_ttl
        IN NS one.juro.sk.
1       IN PTR one.juro.sk.
2       IN PTR two.juro.sk.
; the above PTR records are reverse mappings

SOA means Start of Authority. Notice the ";" at the beginning of some lines; it marks a comment. The numbers represent time in seconds.
Now you can issue the command "ndc start". If your DNS server (BIND) is already running, try "ndc restart". Then try the nslookup command, which should answer your queries. When you issue nslookup, the shell command line will change and you will see something like this:
$ nslookup
Default Name Server: one.juro.sk
Address: 127.0.0.1
Now you can type 10.0.0.2 at the nslookup prompt, and you should receive a reply that the computer you are asking for is two.juro.sk. If you type 10.0.0.1, the reply will be one.juro.sk.
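A lookup session on computer ONE might go roughly like this (a sketch; the exact output format varies between nslookup versions):
> 10.0.0.2
Server:   one.juro.sk
Address:  127.0.0.1
Name:     two.juro.sk
Address:  10.0.0.2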
No DNS server should be running on the other computer (TWO). This is a detail, but newbies often configure a DNS server on more than one machine. In our network we have one DNS server; don't worry about a secondary DNS server. We're dealing here with a SIMPLE NETWORK. It's the only way to start understanding something more complicated.
4. Putting the "domain juro.sk" in the resolv.conf file will tell the second computer (and all other ones, if we plan to include them into our network) about the domain we are in (juro.sk, frank.com, or planet.ru, it's your choice, but keep only one domain. There's a possibility to create more domains. This is something like "Workgroups" in MS Windows and only computers in one domain [Workgroup] will be able to communicate with one another, i.e. computers in the domain "juro.sk" will communicate with one another; if you have computers in the "frank.com" domain in the same network, "frank.com" computers will not communicate with computers in "juro.sk" domain, albeit they all are cabled into one network). And because we are using the private IP addresses here, there will be no interference with Internet. Our DNS server will simply translate one.juro.sk (or 1.frank.com) as 10.0.0.1. (However, for Internet connection you need a router, if you want to use any of the networked computers for dialing out. The router gives you a possibility to share one modem with several computers. If you have a simple network with two or three computers and need to make an immediate dial out connection, try to dial out from the DNS server. A router is a computer that serves as a gateway - a way out of the private Intranet. Please look for information elsewhere, or else download a freesco mini dialout router and install it; it's a preconfigured mini router with diald I tested both from Windows and Linux and which worked well. You will only need to configure your ISP. Find the software through search engines, freesco should also be on http://freshmeat.net, it's a diskette mini distribution, so an old 386 without a hard disk might serve you good).
The computer TWO will read the DNS configuration from computer ONE. So 10.0.0.1 is the address of computer ONE (and 10.0.0.2 of computer TWO). The resolv.conf on computer ONE has the following syntax:
domain juro.sk
nameserver 127.0.0.1
nameserver 10.0.0.1   # (this is maybe not necessary, but I have it there)

The resolv.conf on the computer TWO needs this:
domain juro.sk
nameserver 10.0.0.1
Again, read my article from the September issue on how to configure the simple network. If you have a working network and the above-mentioned configuration ready, you will be able to send mail from root or user accounts either from computer ONE to computer TWO, or from computer TWO to computer ONE. If you connect to the net, the DNS name server we just configured will resolve the IP addresses of names like www.linuxgazette.com. So when you execute the nslookup command and type any www address at its prompt, you will get the numerical IP address. This information comes via the root DNS servers we mentioned above.
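As a quick test, you can send a message from ONE to TWO with the standard mail client (a sketch; it assumes a user account, here "juro", exists on computer TWO):
[root@one /]# echo "Hello from ONE" | mail -s "test" juro@two.juro.sk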
If there is anything wrong, try to run "ndc restart". If there is still a problem, check your network connection.
I haven't tested it yet, but it should work. However, you must install a mail server for Windows, analogous to sendmail on Linux. One way to do this is to try some freeware, or to use professional software like Winroute, which has a mail server, DHCP server, etc. (Winroute for MS Windows can also be used as a dial-up router). Here again it is DNS that will help you send mail. Let me repeat the most important information from all this hard digging: no editing of the sendmail.cf file is necessary. The sendmail configuration file is preconfigured to work immediately.
Some five years ago I downloaded the PCGuardian Encryption Engine (www.pcguardian.com) and used it. Although it is shareware with an expiration, I managed to delete my C: drive several times, so I could reinstall it after it had already been installed. Please understand that if you do anything described here, you do it at your own risk.
The PCGuardian Encryption Engine will totally encrypt a DOS FAT16 or Windows FAT32 partition, and you will have to enter your system through a password. If you boot from a diskette and look at drive C:, you will see garbage. If you later want to delete the encrypted partition, DOS fdisk will refuse, but Linux fdisk or cfdisk will not.
The problem, if you have a boot manager, is that you must use one that does not interfere with the password boot manager. This is quite a complicated issue, but generally speaking, the password engine of the PCGuardian software behaves like a boot manager in that it is installed in the MBR. I used the BOSS boot manager from the FreeBSD distribution disks. BOSS was installed first, and the PCGuardian password manager did not damage the BOSS boot manager or the MBR. This means that first I received a password invitation, then the BOSS boot manager, and then I could easily boot the encrypted Windows partition or Linux. When I selected "Restart in MS-DOS Mode" from the Windows partition, I could also use the loadlin.exe file to boot Linux from the encrypted partition; however, the Linux partition was obviously on a different disk.
Other boot managers will not work with PCGuardian or other "MBR password" encryption managers. This means that you will destroy either the MBR (for example, Boot Manager Menu, which also destroyed my whole encrypted disk) or all data on the disk. So far I can say that the GAG boot manager may also work. You can download GAG from http://www.arrakis.es/~scostas/SOFTWARE/GAG/gageng.htm - it is probably the best boot manager and it is free. If you want to download BOSS, follow the ftp links from www.freebsd.org. Having two MBR codes is a very dangerous thing; the best thing is not to try it. Obviously, you cannot mount such an encrypted Windows partition from Linux unless the manufacturer gives you a driver.
(Person new) sing.
(Person new) sing: 'MaryHadALittleLamb'.
(Person new) sing: 'MaryHadALittleLamb' andDoIt: 'loudly'.
(Person new) sing: 'MaryHadALittleLamb' andDoIt: 'quietly'.
(Person new) whatIsMyHeight.
Pretty easy stuff, eh?2
Notice how the Smalltalk code is very readable and is very similar to how
I initially wrote the questions in English. Each one of these requests
would be what we Smalltalkers call a message
that the Person responds to, and the method in which they respond is determined
by what we Smalltalkers call a method.
Again, pretty easy and intuitive stuff.
Note on the last message, I switched the perspective
around to whatIsMyHeight as opposed to whatIsYourHeight.
We could easily have made a method called whatIsYourHeight, but
it's common practice to name methods from the perspective of the object3.
Now, you'll notice that each request has (Person new) in it; you'd be correct in assuming we're asking 5 different people to do something - we're asking a new Person to do something each time. What if we want to ask the same person to do everything?
There are a few ways we could do this; one of them is:
| aPerson |
aPerson := (Person new).
aPerson sing.
aPerson sing: 'MaryHadALittleLamb'.
aPerson sing: 'MaryHadALittleLamb' andDoIt: 'loudly'.
aPerson sing: 'MaryHadALittleLamb' andDoIt: 'quietly'.
aPerson whatIsMyHeight.
The first line is declaring a temporary variable.
Hmm, this is the first traditional computer term that we've used so far
in our discussion, not too bad. Since we don't have a name for the
person, we'll just call the person aPerson. Much better than
'x', 'y', or 'i' that you often see in other programming
languages. Not that you couldn't call a variable x in Smalltalk,
it's just that you're encouraged to name things descriptively. The common convention is to run your words together, capitalizing each successive word (IMHO, this includes acronyms too). For example,
you could ask the Person to runToTheDmv. So in the above code
snippet, we're creating a new person and assigning (:=) that person
to a temporary variable called aPerson. Then we're asking
aPerson to perform their various methods by sending them messages.
So the question naturally arises, what is 'Person'?
Thinking in terms of nouns, a Person is a specific class or subset of nouns.
Well, in Smalltalk Person is an object too, but it's a special kind
of object that is called a class.
You can think of a class as a blueprint object for making related objects.
When we ask a class to make a new instance of an object, it's called instantiating
an object. Now, coming back to the properties of an object, they
are stored in what are called instance variables
of the object4. When we were
asking aPerson for their height, they probably responded with what
they had stored in their instance variable (we don't know for sure, as
we don't know how the person determines their height).
Revisiting our conception of what an object is, we
can now refine it: an object is a grouping of messages and data
that its messages can operate on. This brings us to our next
subject: Encapsulation.
(Person new) sing.
(Person new) sing: 'MaryHadALittleLamb'.
(Person new) sing: 'MaryHadALittleLamb' andDoIt: 'loudly'.
(Person new) sing: 'MaryHadALittleLamb' andDoIt: 'quietly'.
(Person new) whatIsMyHeight.
"Now print this, and you'll see a very large number as the result, since
it's 1067 digits long, I'm not going to paste it in here. Note,
this takes 5.9 seconds to run on my P200, which is pretty respectable performance.
Also note the size of the numbers you can work
with - you don't have the usual predefined fixed limits such as an int
that has the range from -2,147,483,648 to 2,147,483,647."
1000 factorial
"If you want to and have the time, just for grins try 10000 factorial (I didn't have the patience to run this on my machine, even in another thread)"
"For the curious, no I didn't count the number of digits returned from
1000 factorial, since the message factorial returns a LargeInteger, we
can just ask that LargeInteger what size it is."
(1000 factorial) size
"If you want to check that the correct numbers are actually being computed,
try this and it should give you an answer of 1000"
1000 factorial // 999 factorial
"Looking for what kind of precision you can get? Try:"
123443/5432
"The interesting thing you'll note is that it returns a Fraction!
No rounding off to the first 5 decimal places by default. Instead
of printing it, try inspecting this guy, you'll see a Fraction object,
with a numerator and denominator just as you'd expect:"
"Of course, you can use floats too, in which case you do get a rounding
off - to 14 places give or take depending on the flavour of Smalltalk you're
using. Try this, and you'll get the answer: 22.72426793416332"
123443.45/5432.23
"Finally, for those curious about how long things take, to time something
in Smalltalk you can print this, which will print out the milliseconds
it took to run. These measurements are not even meant to be toy
benchmarks, but are just presented for interest."
Time millisecondsToRun: [100 factorial]
Time millisecondsToRun: [1000 factorial]
"On my P200, the above lines took:
0.020 seconds
5.967 seconds"
People with some programming experience will notice that we didn't have to fuss with what types of numbers we're working with, (integers, large integers, floats, large floats), or type mismatches, or predefined size limitations, or wrapping primitive types in objects then unwrapping them or any other of this type of nonsense ;-). We just naturally typed in what we wanted to do without having to do any jumping through hoops for the sake of the computer. This comes from the power of P&P: Pure objects and Polymorphism (which we'll discuss next time).
Q: Can you show how your examples can be done in Java?
I'll try and answer this without getting on a soapbox (language questions are often equivalent to religious questions). There are three parts to this answer:
Q: What is a skin?
A: A skin is an installable look-n-feel or theme. In squeak you
can install a Windoze look-n-feel, MacOS Aqua look-n-feel, etc. (not
sure how many skins are out there or what state they're in). I remember
VisualWorks Smalltalk having the skins concept back in '94 (wasn't called
a skin back then)- it's one of the things about Smalltalk that first caught
my eye. At the time I had just spent a year doing a very painful
port of OpenWindows to Motif for parts of a C based application, then I
strolled past a coworker's desk and they showed me how they could switch
the look-n-feel of their Smalltalk application from Windoze to Motif to
MacOS with a click of the mouse. Talk about a productivity boost!
Q: Can you have Smalltalk run in web browsers?
A: You certainly can; in fact I thought about setting up a Squeaklet that would let people execute the snippets from this series from the comfort of their web browsers... yeah, you can have a development environment in
your web browser, not just runtime code. However, it was just one
more thing for me to do in my limited time and I decided to forgo it for
now. This is a possible future topic. BTW - most flavours of
Smalltalk have some mechanism to run thin clients in a web browser.
Q: Where is the 'main' function?
A: Smalltalk doesn't have a 'main' function; this can be confusing
to Smalltalk newbies as so many other languages have this notion.
Conceptually, Smalltalk is an always running set of live objects which
is why there is no 'main' function - if your environment is always running,
having a 'main' function is nonsensical as you're not starting or ending
anywhere. When you want to start an application you've written, you
merely ask it to open up its window that it uses as a starting point.
When you deliver an application, you merely open up your application's
starting window and package your environment (this is a simplification
here).
Realistically though, you have to have some starting
point as you need to shut down your computer sometimes. Well, Smalltalk
does what is called saving an image. It's called an image because
what you're saving is really a snapshot in time of your environment.
When you start it up again, everything is exactly where you left it.
To do this, Smalltalk has some bootstrap code to get itself going again,
which could technically be considered a 'main' function. However,
the point is that you do not have a 'main' function when writing an application.
"This is a Class definition"
Object subclass: #Person
instanceVariableNames: ''
classVariableNames: ''
poolDictionaries: ''
category: 'MakingSmalltalk-Article2'
"My Characteristics is a category of methods for the class (similar
to an interface in Java (but it's not enforced))"
Person methodsFor: 'My Characteristics'"The 1 method in the My Characteristics category"
whatIsMyHeight"This is the singing category, it has 6 methods"
"Actually, in practice we'd probably just name this method 'height', with the 'whatIsMy' implied.
Simple example to show how a query about my characteristic can be done. Ah-ha - notice that the height is not being returned via an instance variable as we guessed at above, but is in fact hardcoded... A BAD PRACTICE TO DO, but is fine for this example to keep things simple, and wanted to show how to do a ' in a string"(Workspace new contents: 'My height is 5'' 7"') openLabel: 'This is my height'! !
Person methodsFor: 'Singing'"And the methods for singing - method 1 of 6"
maryHadALittleLambLyrics"singing method 2 of 6, we use the 'my' prefix convention to indicate a private method"^'Mary had a little lamb, little lamb, little lamb, Mary had a little lamb whose fleece was white as snow.'
mySing: someLyrics inManner: anAdjective withTitle: aTitle"singing method 3 of 6"
"Using simple logic here for illustrative purposes - if the adjective is not 'loudly' or 'quietly' just ignore how we're being asked to sing"| tmpLyrics |
anAdjective = 'loudly'
ifTrue: [tmpLyrics := someLyrics asUppercase].
anAdjective = 'quietly'
ifTrue: [tmpLyrics := someLyrics asLowercase].
self mySing: tmpLyrics withTitle: aTitle
mySing: someLyrics withTitle: aTitle"singing method 4 of 6"(Workspace new contents: someLyrics) openLabel: aTitle
sing"singing method 5 of 6"self mySing: 'Do be do be doooooo.' withTitle: 'A bad impression of Sinatra'
sing: aSong"singing method 6 of 6"aSong = 'MaryHadALittleLamb'
ifTrue: [self mySing: self maryHadALittleLambLyrics withTitle: 'Mary had a little lamb']
ifFalse: [self sing].
sing: aSong andDoIt: anAdjectiveaSong = 'MaryHadALittleLamb'
ifTrue: [self mySing: self maryHadALittleLambLyrics inManner: anAdjective withTitle: 'Mary had a little lamb']
ifFalse: [self sing].
NPR had a story this month about the penguins arriving on the beaches in Rio de Janeiro and other parts of Brazil. Usually there are five or so penguins, but this year there are hundreds of penguins who used to stick around the Falklands/Malvinas Islands but have now migrated to Brazil.
Scientists suspect it may be a long-term climatic change: the cold ocean currents the penguins follow to find their food may have shifted.
Some Brazilians have adopted penguins as pets, but many don't know how to care for them. The penguins don't do well when the weather turns hot, so some people put them in the freezer. Unfortunately, this gives the penguins hypothermia, because this variety is used to a temperate environment. One of Brazil's zoos is building a penguin rehabilitation center for the penguins it has acquired and the ex-pets who have been donated to it.
Can any readers in Brazil provide any more information?
In January, Linux Journal published a short interview article about an oil spill near Australia's Phillip Island, where fairy penguins (aka little penguins, the kind that bit Linus) live, and how hackers (including LJ) sent money to help rehabilitate the birds. The article describes how part of the rehabilitation included sweaters for the birds, apparently made from socks.
http://www.penguins.org.au has pictures and whimsical drawings of penguins, and panels from the High Tide comic strip featuring penguins. The site features information about the Phillip Island Nature Park.
[Nov 30, 3:45pm] As I write, the N30 WTO II demonstrations have started in downtown Seattle. "Several hundred" protesters (600-800 according to the news) are marching to Westlake Park from separate demonstrations on Capitol Hill and the International District. The most ingenious of their plans is a giant cake they'll be presenting to Mayor Paul Schell and booster Pat Davis, to thank them for bringing the WTO trade ministerial last year.... so they [the activists] could expose what bastards they [the WTO] are. In other matters, nine Starbucks were hit yesterday and the day before.... Anyway, you'll know by the time you read this whether news about N30 makes it outside the region.
Michael Orr
Editor, Linux Gazette, gazette@ssc.com