From J. David Peet on Thu, 30 Mar 2000
Hi,
I just ran across your article
www.linuxdoc.org/LDP/LG/issue50/tag/26.html that talks a (tiny) bit about Win4Lin.
FYI, Win4Lin is now available. And if you are interested, the full documentation is on-line on the TreLOS web site. www.trelos.com. You can also order it via this web site.
In case you did not know, the Win4Lin technology has a long history as "Merge" for SCO Unix. SCO has been an OEM of our Merge technology for years. Win4Lin is the Linux version of the existing current technology.
I didn't know that. I thought DOS/MERGE was from a company called "Locus" or something like that.
One minor point -- Win4Lin is not a "clone" of VMWare as such. They both provide a virtual machine to run Windows in on Linux, but there are significant differences. Refer to the new "white-paper" document: http://www.trelos.com/trelos/Trelos/Products/Win4Lin_Whitepaper.htm Near the end are two paragraphs that compare and contrast Win4Lin, WINE, and VMWare.
-Thanks -David Peet david.peet@trelos.com
I probably shouldn't have used the word "clone" --- it isn't a very precise term anyway. Obviously, in light of Win4Lin's heritage it might be more appropriate to say that VMWare is a "clone" of Win4Lin's predecessor. MERGE is the granddaddy of MS-DOS emulators for UNIX.
Anyway, I'll let people make up their own mind based on their own reading and experience.
I haven't actually used any DOS or MS Windows software in years (only the occasional, blessedly brief trifle to help someone out here or there). So even if you were to send a copy to me for my evaluation I can't promise that I'd ever get around to trying it. (I think I have a VMWare CD around here somewhere -- an eval copy or some such). Heather, my editor and wife, still uses MS-Windows occasionally. I know she's installed DOSEMU and WINE and used them a bit (DOSemu extensively). I've installed and played with DOSemu (helped someone with it at an installfest a couple of weeks ago, too). However, I've never even tried WINE!
Anyway, good luck on your new release.
Answered By J. David Peet on Thu, 30 Mar 2000
Jim Dennis wrote:
In case you did not know, the Win4Lin technology has a long history as "Merge" for SCO Unix. SCO has been an OEM of our Merge technology for years. Win4Lin is the Linux version of the existing current technology.
I didn't know that. I thought DOS/MERGE was from a company called "Locus" or something like that.
Yes, I was there at Locus at the very start of Merge. It's been a long path since then, with some odd twists. First Locus merged with Platinum, and Merge continued to be developed, including the current SCO Merge 4 version with Win95 support. Then, right before CA digested Platinum, a company in Santa Cruz, DASCOM, bought (rescued!) the Merge technology out from Platinum and hired some of us old-time Merge developers to form a company named "TreLOS" to take the technology forward, including porting it to Linux. (Insert danger music here.) Then, before TreLOS could be spun off as its own company, IBM bought DASCOM, for reasons having nothing at all to do with Merge/TreLOS. Then in February IBM finished spinning TreLOS off as its own company. We are currently a (very small) privately held company with NO affiliation with IBM and NO IBM technology. (IBM for some reason wanted that to be clear.) Once we escaped from IBM it took a bit more than a month to set up the infrastructure to be able to release the product. It was getting caught up in the IBM acquisition of DASCOM that prevented us from releasing the product last fall as we had originally planned. The Win4Lin 1.0 product has actually been ready for months now. All that time was not completely wasted, because IBM let us have an extended semi-secret beta program, so it's actually been in real use for quite a while for a "1.0" version product.
So that's the history to this point. Perhaps more than you wanted to know.
... Anyway, good luck on your new release.
-Thanks -David
P.S. Now that we are launching Win4Lin 1.0, having reviews done is a Good Thing. So if you or Heather would like to do a review of it that is extremely easy to arrange.
From Tim Moss on Thu, 30 Mar 2000
I'm trying to extract a block of text from a file using just bash and standard shell utilities (no perl, awk, sed, etc). I have a definitive pattern that can denote the start and end or I can easily get the line numbers that denote the start and end of the block of text I'm interested in (which, by the way, I don't know ahead of time. I only know where it is in the file). I can't find a utility or command that will extract everything that falls between those points. Does such a thing exist?
Thanks
awk and sed are considered to be "standard shell utilities." (They are part of the POSIX specification).
The sed expression is simply:
sed -n "$begin,${end}p" ...
... if begin and end are line numbers.
For patterns it's easier to use awk:
awk "/$begin/,/$end/" ...
... Note: begin and end are regexes and should be chosen carefully!
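For instance (the file name and patterns here are just illustrations), to pull lines 10 through 20 of a file, or everything between a pair of marker lines:

sed -n '10,20p' notes.txt
awk '/^BEGIN/,/^END/' notes.txt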
However, since you don't want to do it the easy way, here are some alternatives:
------------------ WARNING: very long -------------------------
If it is a text file and you just want some lines out of it try something like:
#!/bin/sh
# shextract.sh
# extract part of a file between a
# pair of globbing patterns
[ "$#" -eq "2" ] || {
	echo "Must supply begin and end patterns" >&2
	exit 1
	}
begin=$1
end=$2
of=""   ## output flag
while read a; do
	case "$a" in
		"$begin") of="true";;
		"$end")   of="";;
		esac
	[ -n "$of" ] && echo "$a"
	done
exit 0
... this uses no external utilities except for the test command ('[') and possibly the 'echo' command from VERY old versions of Bourne sh. It should be supported under any Bourne shell derivative. Under bash these are builtin commands.
It takes two parameters. These are "globbing" patterns NOT regular expressions. They should be quoted, especially if they contain shell wildcards (?, *, and [...] expressions).
Read any good shell programming reference (or even the rather weak 'case...esac' section of the bash man page) for details on the acceptable pattern syntax. Note that because of the way I'm using this, you could invoke this program (let's call it shextract, for "shell extraction") like so:
shextract "[bB]egin|[Ss]tart" "[Ee]nd|[Ss]top"
... to extract the lines between any occurrence of the term "begin" or "Begin" or "start" or "Start" and any subsequent occurrence of "end" or "End" or "stop" or "Stop."
Notice that I can use the (quoted) pipe symbol in this context to show "alternation" (similar to the egrep use of the same token).
This script could be easily modified to use regex's instead of glob patterns (though we'd either have to use 'grep' for that or rely on a much newer shell such as ksh '93 or bash v. 2.x to do so).
This particular version will extract all regions of the file that lie between our begin and end tokens.
To stop after the first region we have to insert a "break" statement into our "$end") ...;; case (see the sketch just below). To support an "nth" occurrence of the pattern we'd have to use an additional argument. To cope with degenerate input (cases where the begin and end tokens might be out of order, nested or overlapped) we'd have to do considerably more work.
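For instance (a sketch; the added test guards against an end token that appears before any begin token):

case "$a" in
	"$begin") of="true";;
	"$end")   [ -n "$of" ] && break;;
	esac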
As written this example requires exactly two arguments. It will only process input from stdin and only write to stdout. We could easily add code to handle more arguments (first two are patterns, 'shift'ed out; the rest are input file names) and some option switches (for output file, only one extraction per file, emit errors if end pattern is found before start pattern, emit warnings if no begin or subsequent end pattern is found on any input file, stop processing on any error/warning, etc).
Note: my exit 0 may seem superfluous here. However, it does prevent the shell from noting that the program "exited with non-zero return value" or warnings to that effect. That's due to my use of test ('[') on my output flag in my loop. In the normal case that will have left a non-zero return value since my of flag will be zero length for the part of the file AFTER the end pattern was found.
Note: this program is SLOW. (That's what you get for asking for it in sh). Running it on my 38,000 line /usr/share/games/hangman-words (this laptop doesn't have /usr/dict/words) it takes about 30 seconds, or roughly only 1000 lines per second on a P166 with 16Mb of RAM. A binary can do better than that under MS-DOS on a 4.77MHz XT!
BUG: If any lines begin with - (dashes) then your version of echo might try to treat the beginnings of your lines as arguments. This might cause the echo command to parse the rest of the line for escape sequences. If you have printf(1) available (as a built-in to your shell or as an external command) then you might want to use that instead of echo.
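For example, a safer output line (note that the trailing newline has to be explicit with printf):

printf "%s\n" "$a"

Since the data is an argument to the format string rather than a direct argument, leading dashes pass through untouched.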
To do this based on line numbers rather than patterns we could use something more like:
#!/bin/sh
# lnextract.sh
# extract part of a file between
# line numbers $1 and $2
function isnum () {
	case "$1" in
		*[^0-9]*) return 1;;
		esac
	}
[ "$#" -gt "2" ] || {
	echo "Must supply begin and end line numbers (and at least one file)" >&2
	exit 1
	}
isnum "$1" || {
	echo "first argument (first line) must be a whole number" >&2
	exit 1
	}
isnum "$2" || {
	echo "second argument (last line) must be a whole number" >&2
	exit 1
	}
begin=$1
end=$2
[ "$begin" -le "$end" ] || {
	echo "begin must be less than or equal to end" >&2
	exit 1
	}
shift 2
for i; do
	[ -r "$i" -a -f "$i" ] || {
		echo "$i should be an existing regular file" >&2
		continue
		}
	ln=0
	while read a; do
		let ln+=1
		[ "$ln" -ge "$begin" ] && echo "$a"
		[ "$ln" -lt "$end" ] || break
		done < "$i"
	done
exit 0
This rather ugly little example does do quite a bit more checking than my previous one.
It checks that its first two arguments are numbers (your shell must support negated character class globs for this; ksh '88 and later, bash 1.x and 2.x, and zsh all qualify), and that the first is less than or equal to the latter. Then it shifts those out of the way so it can iterate over the rest of the arguments, extracting our interval of lines from each. It checks that each file is "regular" (not a directory, socket, or device node) and readable before it tries to extract a portion of it. It will follow symlinks.
It has some of the same limitations we saw before.
In addition it won't accept its input from stdin (although we could add that by putting the main loop into a shell function and invoking it one way if our arg count was exactly two, and differently (within our for loop) if $# is greater than two). I don't feel like doing that here --- as this message is already way too long and that example is complicated enough.
It's also possible to use a combination of 'head' and 'tail' to do this. (That's a common exercise in shell programming classes). You just use something like:
head -$end $file | tail -$(( $end - $begin + 1 ))
... note that the 'tail' command on many versions of UNIX can't handle arbitrary offsets. It can only handle the lines that fit into a fixed block size. GNU tail is somewhat more robust (and correspondingly larger and more complicated). A classic way to work around limitations on tail was to use tac (cat a file backwards, from last line to first) and head (and tac again). This might use prodigious amounts of memory or disk space (might use temporary files).
If you don't want line oriented output --- and your patterns are regular expressions, and you're willing to use grep and dd then here's a different approach:
start=$( grep -b "$begin" ... )
stop=$(( $( grep -b "$end" ... ) - $start ))
dd if="$file" skip=$start count=$stop bs=1
This is not a shell script, just an example. Obviously you'd have to initialize $begin, $end, and $file or use $1, $2, and $3 for them to make this into a script. Also you have to modify those grep -b commands a little bit (note my ellipses). This is because grep will be giving us too much information. It will be giving a byte offset to the beginning of each pattern match, and it will be printing the matching line, too.
We can fix this with a little work. Let's assume that we want the first occurrence of "$begin" and the last occurrence of "$end". Here are the commands that will give us just the raw numbers:
grep -b "$begin" "$file" | head -1 { IFS=: read b x echo b } grep -b "$end" "$file" | tail -1 | { IFS=: read e x echo e }
... notice I just pipe grep through head or tail to get the first or last matching line, and I use IFS to change my field separator to a ":" (which grep uses to separate the offset value from the rest of the line). I read the line into two variables (separated by the IFS character(s)), and throw away the extraneous data by simply echoing the part I wanted (the byte offset) back out of my subshell.
Note: whenever you use or see a pipe operator in a shell command or script --- you should realize that you've created an implicit subshell to handle that.
Incidentally, if your patterns might have a leading - (dash) then you'll have problems passing them to grep. You can massage the pattern a little bit by wrapping the first character with square brackets. Thus "foo" becomes "[f]oo" and "-bar" becomes "[-]bar". (grep won't consider an argument starting with [ to be a command line switch, but it will try to parse -bar as one).
This is easily done with printf and sed:
printf "%s" "$pattern" | sed -e 's/./[&]/'
... note my previous warning about 'echo' --- it's pretty permissive about arguments that start with dashes that it doesn't recognize; it'll just echo those without error. But if your pattern starts with "-e " or -n it can affect how the rest of the string is represented.
Note that GNU grep and echo DON'T seem to take the -- option that is included with some GNU utilities. This would avoid the whole issue of leading dashes since this conventionally marks the end of all switch/option parsing for them.
Of course you said you didn't want to use sed, so you've made the job harder. Not impossible, but harder. With newer shells like ksh '93 and bash 2.x we can use something like:
[${pattern:0:1}]${pattern:1}
(read any recent good book on shell programming to learn about parameter expansion).
You can use the old 'cut' utility, or 'dd' to get these substrings. Of course those are just as external to the shell as perl, awk, sed, test, expr and printf.
If you really wanted to do this last sort of thing (getting a specific size substring from a variable's value, starting from an offset in the string, using only the bash 1.x parameter expansion primitives) it could be done with a whole lot of fussing. I'd use ${#varname} to get the size, a loop to build temporary strings of ? (question mark) characters of the right length, and the ${foo#} and ${foo%} operators (stripping patterns from the left and right of a variable's value respectively) to isolate my substring.
Yuck! That really is as ugly as it sounds.
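If you're curious, here's roughly what that fussing looks like for just the first character (a sketch; it assumes the remainder of the pattern, $rest, contains no glob characters --- coping with those is exactly what the loop of ? characters is for):

rest=${pat#?}        # everything after the first character
first=${pat%$rest}   # strip $rest from the right; the first character remains
pat="[$first]$rest"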
Anyway. I think I've said enough on the subject for now.
I'm sure you can do what you need to. A lot of it depends on which shell you're using (not just csh vs. Bourne, but ksh '88 vs. '93 and bash v1.14 vs. 2.x, etc) and just how rigid you are about that constraint about "standard utilities."
All of the examples here (except for the ${foo:} parameter expansion) are compatible with bash 1.14.
(BTW: now that I'm really learning C --- y'all can either rest easy that I'll be laying off the sh syntax for awhile, or lay awake in fear of what I'll be writing about next month).
Here's a short GNU C program to print a set of lines between one number and another:
/* extract a portion of a file from some beginning line, to
 * some ending line
 * this functions as a filter --- it doesn't take a list
 * of file name arguments.
 */
#define _GNU_SOURCE	/* for getline() */
#include <stdio.h>
#include <stdlib.h>

int main (int argc, char * argv[] )
{
	char * linestr = NULL;
	size_t linelen = 0;
	long begin, end, current = 0;
	if ( argc < 3 ) {
		fprintf(stderr, "Usage: %s begin end\n", argv[0]);
		exit(1);
		}
	begin = atol(argv[1]);
	if ( begin < 1 ) {
		fprintf(stderr, "Argument error: %s should be a number "
			"greater than zero\n", argv[1]);
		exit(1);
		}
	end = atol(argv[2]);
	if ( end < begin ) {
		fprintf(stderr, "Argument error: %s should be a number "
			"no less than %s\n", argv[2], argv[1]);
		exit(1);
		}
	while ( getline(&linestr, &linelen, stdin) > -1
			&& (++current <= end) ) {
		if (current >= begin) {
			printf("%s", linestr);
			}
		}
	exit(0);
}
This is about the same length as my shell version. It uses atol() rather than strtol() for the argument to number conversion. atol() (ASCII to long) is simpler, but can't convey errors back to us. However, I require values greater than zero, and GNU glibc atol() returns 0 for strings that can't be converted to longs. I also use the GNU getline() function --- which is non-standard, but much more convenient and robust than fussing with scanf(), fgets() and sscanf(), and getc() stuff.
Tim, I've copied this to my Linux Gazette editor, since it's a pretty general question and a way detailed answer. Unless you have any objection it will go into my column in the next issue. The sender's e-mail address and organizational affiliation are always removed from answer guy articles unless they request otherwise.
From jashby on Sun, 02 Apr 2000
Hello,
My name is Jason Ashby. I work for a computer company and am really new to Linux. I have been given the task to make a zip drive visible accross a network. It is loaded on a Linux machine, and I can get the AIX machine to mount it, but we can not copy files to or from the zip drive on AIX. Could you see it within your power to tell me why?
Thanks, Jason Ashby
Unfortunately your question is unclear. You don't tell me which system is supposed to be the server, what sorts of systems are intended to be the clients, nor what type of filesystems will be contained on the Zip media.
"make a zip drive visible accross [sic] a network"
... presumably you mean via NFS or Samba. If the client systems are UNIX or Linux you'd use NFS, if they are MS-Windows or OS/2 you'd use Samba. (If they were Apple Macs running MacOS you'd look at the netatalk or CAP packages, and if they were old MS-DOS machines you might try installing Netware client drivers on those and mars_nwe or a commercial copy of Netware on the Linux box).
Let's assume you mean to mount the Zip disks on your Linux box, and "export" them (NFS terminology) to your AIX systems. Then you'd modify your /etc/fstab to contain an entry appropriate to mount the Zip media into your file hierarchy. Maybe you'd mount it under /mnt/zip or under /zip. (You might have multiple fstab entries to support different filesystems that you might have stored on your Zip media. In most cases you'd use msdos, or one of the other variants of Linux' MS-DOS filesystem: umsdos, vfat, or uvfat).
Then you'd edit your /etc/exports file to export that to your LAN (or to specific hosts or IP address/network patterns).
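As a sketch (the device name, mount point, and network address here are assumptions; adjust them for your setup), the relevant entries might look like:

/etc/fstab:
	/dev/sda4   /mnt/zip   vfat   noauto,user   0 0

/etc/exports:
	/mnt/zip    192.168.1.0/255.255.255.0(rw)

Remember to run 'exportfs -a' (if your NFS package provides it) or restart your NFS daemons after editing /etc/exports.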
Try reading the man pages for /etc/fstab and /etc/exports and perusing the following HOWTOs:
- Zip Drive Mini-HOWTO
- http://www.linuxdoc.org/HOWTO/mini/ZIP-Drive.html
- NFS HOWTO
- http://www.linuxdoc.org/HOWTO/NFS-HOWTO.html
And the excellent new:
- Filesystems HOWTO
- http://www.linuxdoc.org/HOWTO/Filesystems-HOWTO.html
by Martin Hinner.
If that doesn't do the trick, try clarifying your question. It often helps to draw a little map (ASCII art is good!).
From David Buckley on Wed, 05 Apr 2000
I am new to Linux and am wondering if there is an easy way to access my Win98 disk from within Linux. I have lots of files (mp3s, etc.) that I would like to use in Linux. What is the easiest way to get them?
Thanks, David Buckley
I'm guessing you're talking about accessing files that are on your local system (i.e. that you have a dual-boot installation).
In that case use the 'mount' command. For example the first partition on your first IDE drive is /dev/hda1 (under Linux). If that's your C: drive under MS-DOS/Windows then you can use a command like:
mkdir /mnt/c && mount -t vfat /dev/hda1 /mnt/c
... (as the 'root' user) to make the C: directory tree appear under /mnt/c.
Once you've done that you can use normal Linux commands and programs to access those files.
That will only mount the filesystem for the duration of that session (until you reboot or unmount it with the 'umount' command). However, you can make this process automatic by adding an entry to your /etc/fstab (filesystem table).
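A matching /etc/fstab entry might look like this (the mount point and options are just one reasonable choice):

/dev/hda1   /mnt/c   vfat   defaults   0 0

With that in place the filesystem will be mounted at boot time, and a bare 'mount /mnt/c' will work without the other arguments.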
For more info on this read the appropriate sections of the Linux Installation & Getting Started Guide (*), the System Administrator's Guide (*) (both part of the LDP at http://www.linuxdoc.org) and the mount(8), and fstab(5) man pages with the following command:
man 8 mount; man 5 fstab
(Note, in the first case you do need to specify the manual chapter/section number, 8, since there is a mount() system call which is used by programmers, particularly for writing programs like the 'mount' command itself.) When you see references to keywords in the form foo(1), it's a hint that foo is documented in that chapter of the man pages: 1 is user commands, 2 is system calls, 3 is library functions, 4 is for devices, 5 is for file formats, etc.
- (*) LIGS: Chapter 4, System Administration
- http://www.linuxdoc.org/LDP/gs/node6.html#SECTION00640000000000000000
- (*) LSAG: Filesystems
- http://www.linuxdoc.org/LDP/sag/x1038.html
To access your MS-DOS formatted floppies it's often easier to use the mtools commands. Look at the mtools(1) man pages for details on that.
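For example (assuming your /etc/mtools.conf maps a: to your floppy device, as most distributions do):

mdir a:
mcopy a:report.txt /tmp/

... lists the floppy's directory, then copies a file off it, with no mounting required.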
Here are a couple of other HOWTOs to read through:
- From DOS/Windows to Linux HOWTO
- http://www.linuxdoc.org/HOWTO/DOS-Win-to-Linux-HOWTO.html
- Filesystems HOWTO
- http://www.linuxdoc.org/HOWTO/Filesystems-HOWTO.html
In general you want to look through these to find answers to most common Linux questions. (As you might imagine, you've asked a very common one here.) In fact it's number 4.2 in the FAQ: http://www.linuxdoc.org/FAQ/Linux-FAQ-4.html#ss4.2
You can also search the Linux Gazette at:
- Full search on archive Linux Gazette Search
- http://www.linuxgazette.com/search.html
Although I can see how you might not know what terms to search on until you've covered some of the basics in the LDP guides, or any good book on Linux.
There are also ways to access your Win '9x "shares" (network accessible files, or "exported" directories) from Linux using smbfs.
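A hypothetical example (the host, share, and option syntax here are assumptions --- check the smbmount man page for your Samba version, since the invocation has changed between releases):

mkdir /mnt/music && smbmount //winbox/music /mnt/music -o username=david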
From Paul Ackersviller on Wed, 05 Apr 2000
Jim,
I believe I forgot to say thanks for having written the original answer as it was. I've programmed shells for ages, but have never had occasion to use co-processes. Seeing examples of how it's done is always a good thing.
-- Paul Ackersviller
You're welcome. I've never actually used them myself. However, I was jazzed to learn how they actually work when someone I was working with showed me an example.
Sometimes I take advantage of being "The Answer Guy" and grab any pretense to show off some neat trick that I've discovered or been shown (I usually try to give credit where credit is due --- but sometimes that's pretty ambiguous and doesn't fit into the flow of what I'm typing).
Anyway, I'm a firm believer in having a full toolbox. You often won't know what tool would do the trick unless you've seen a wide enough variety of tools to recognize a nail vs. a screw and can associate one with a hammer and the other with a screwdriver.
From Ranone7 on Wed, 05 Apr 2000
At this web site http://www.linuxmall.com/product/01462.html I see the title "Red Hat Linux Deluxe for Intel" Is there a Linux for AMD out there? or can I use the above linux version with an AMD-Athlon.
Thank you
The packaging is suffering from a compromise. It's trying not to sound too technical. Red Hat Linux for Intel should work on any x86 and compatible CPUs. Note that Mandrake requires at least a Pentium (it won't work on old 486 and 386 systems).
What Red Hat Inc was trying to do with this verbiage is distinguish that box from the versions that they have available for SPARC and Alpha based systems. Eventually they'll probably have a PowerPC package available as well.
Many other distributions are similarly available on several platforms.
Answered By Martin Pool on Thu, 06 Apr 2000
On Wed, 5 Apr 2000, Jim Dennis wrote:
>At this web site > >http://www.linuxmall.com/product/01462.html I see
>the title "Red Hat Linux Deluxe for Intel" Is there a Linux for
>AMD out there? or can I use the above linux version with an
>AMD-Athlon.
>Thank you
The packaging is suffering from a compromise. It's trying not to sound too technical. Red Hat Linux for Intel should work on any x86 and compatible CPUs. Note that Mandrake requires at least a Pentium (it won't work on old 486 and 386 systems).
Good explanation. IIRC Athlons are only supported in 2.2.something, so they'll also need a recent distribution. I guess any RedHat version on sale these days will be OK, but notably Debian slink/stable will not boot.
Thanks for that note [from one of the guys on the Linuxcare list that now receives answerguy responses].
I remember hearing about Athlon problems, but I didn't ever get the full story. I was spoiled by the fact that most x86 compatible chips really are x86 COMPATIBLE. I still don't know what the whole deal with that Athlon chip is. I'll BCC someone on this to see if he can clue me in.
Answered By David Benfell on Thu, 6 Apr 2000
The story, as I was able to piece it together, is that the problem was found and fixed in the 2.3.19 kernel. The correction had to do with Memory Type Range Register (MTRR) code. This patch was backported to, possibly the 2.2.12 kernel, and, almost certainly, the 2.2.13 kernel.
However, it still seems to have been an issue with the Mandrake 6.5 distribution, which had a 2.2.12 kernel. On the other hand, my neighbor just installed Red Hat 6.2, with, I think, a 2.2.12 kernel (but the site won't tell), on an Athlon. So I'm confused.
David Benfell
[ So, if you know more about the Athlon MTRR mystery, enlighten us please!
-- Heather. ]
From Le, Dong, ALNTK on Fri, 07 Apr 2000
Hello "The Answer Guy",
My name is Dong Le. I'm quite new to Linux. Since I come from the Unix world, I try to apply Unix concepts to Linux. Sometimes it works; most of the time it does not.
Anyway, I have Redhat 6.1 installed on my 2 Intel-based PCs. I tried to use rcp to remote copy files from one PC to another. I got the error "permission denied" from the other PC. I have a ".rhosts" file set up to give permission to the other PC. I use "octet format" in all of the files/commands so DNS/NIS are not involved at all.
My questions are:
- Why do I have this error?
- Later on I found out that Linux is using PAM to do authentication. For rcp, it is using /etc/pam.d/rsh.conf to authenticate. However, I can not find any information about PAM modules (pam_rhosts_auth.so, for example) regarding how they work. Do you know where I can obtain information about a particular PAM module?
Thanks a lot, Dong Le,
Short answer: Use ssh!
There are a few problems here. First, I've seen versions of rshd (the rsh daemon) that would not seem to accept octet addresses. More importantly many Linux distributions are configured not to respect your ~/.rhosts files.
You are correct that you have to co-ordinate your policy using PAM if your system has the "Pluggable Authentication Modules" suite of programs installed. The configuration file would be /etc/pam.d/rsh. Here's the default that would be installed by Debian:
#%PAM-1.0
auth      required   pam_rhosts_auth.so
auth      required   pam_nologin.so
auth      required   pam_env.so
account   required   pam_unix_acct.so
session   required   pam_unix_session.so
Yours would be pretty similar.
In addition you might find that you need to also modify the arguments on the in.rshd line in your /etc/inetd.conf file. For example if there's a -l option it may be causing your copy of in.rshd to ignore user ~/.rhosts files. A -h option will force it to ignore the contents of any /etc/hosts.equiv file.
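For reference, a stock Red Hat in.rshd line looks something like this (the tcpd wrapper path is typical but may differ on your system):

shell stream tcp nowait root /usr/sbin/tcpd in.rshd

... any -l or -h options would appear at the end of that line.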
(The new Debian rshd package ignores these additional options and requires that you configure your policy through the /etc/pam.d/ files. I don't know if Red Hat has modified its packages in this way for versions 6.1 or 6.2. In 6.0 I'm pretty sure that I was still able to use the command line arguments on the in.rshd entry in the /etc/inetd.conf file for this.)
Of course you can use ssh as a replacement for rsh, and have much better security as well.
From Cleary, James R. on Fri, 07 Apr 2000
Jim,
I just clean installed Redhat 6.0 on my box. I can ping the
box from another machine, but I can't telnet to it, the default configuration should provide for that, shouldn't it? Any help you'd have would be great.
Sincerely, "J.C."
When you say "you can't telnet to it" what do you mean? Does the telnet client seem to just sit there for a long time? Do you get an error message that says something like "connection refused?" Does that come back immediately, or does it take a minute or two? Are you trying to telnet to it by name, or by IP address? (That basically doesn't matter as long as you're using the same form for your ping command).
I disagree with your assertion that the "default configuration should provide for that." Linux appeals to a much broader range of users than traditional, professionally managed UNIX systems. It is not appropriate to assume that all of your users want to be "telnet hosts" (servers or multi-user workstations). In addition, telnet is an old and basically deprecated means of remote access.
(Well, it should be deprecated).
You should probably use ssh, STEL, ssltelnet, or install a Kerberos or the FreeS/WAN IPSec infrastructure to provide you with an encrypted, unspoofable, unsniffable connection between your client and your server.
Please don't respond with "but I'm behind a firewall" or "this is just my home system." Those are "head in the sand" attitudes that make for a brittle infrastructure (one little crack and the whole wall collapses).
Anyway, if you've determined that telnet is really what you need, that it matches your requirements and enforces your policies to your satisfaction, then here are some pointers to troubleshooting common failures. These also apply to ssh, STEL, etc.
You said that 'ping' is working. Assuming that you are using the commands from the same host and using the same form of addressing/naming for your 'ping' and your 'telnet' commands here are the most likely problems:
* Your session might not actually be failing. It might just be taking a very long time. Search the answer guy back issues for the phrase "double;reverse;dns" and you'll find a number of my previous explanations about a common cause of this delay (and some pointers on what to do about it). Here are a couple of them:
- Issue 45: More "Can't Telnet Around My LAN" Problems
- http://www.linuxgazette.com/issue45/tag/11.html
- Issue 38: Telnetd and pausing
- http://www.linuxgazette.com/issue38/tag/32.html
- Issue 30: tv cards and dual monitor
- http://www.linuxgazette.com/issue30/tag_tvcard.html
* You might not have the telnet daemon package installed
on your target host. It might be installed but not properly configured in /etc/inetd.conf. That should contain a line that looks something like:
telnet stream tcp nowait telnetd.telnetd /usr/sbin/tcpd /usr/sbin/in.telnetd
* You might not have inetd running. (It's the daemon, service
program, that reads the /etc/inetd.conf, listens for connections on those ports, and dispatches the various service programs that handle those services).
(An obscure possibility is that you might have something broken in your name services handling. Your system would normally match service/protocol names to IP port numbers and transport layer protocols (TCP, UDP, etc) using the /etc/services file. If that's corrupted, or if your /etc/nsswitch.conf is pointing your NSS libraries to query some really bogus and corrupted backend, it would be possible that inetd would end up listening to the wrong ports for many services. I've never seen anyone mess that up -- but I'm sure it's possible).
* There may be a firewall or packet filtering system between
your client and your target. That might let ICMP ('ping' traffic) through while blocking your TCP ('telnet' on port 23) traffic.
* It's possible that your telnet client program, or one
of the client libraries, is broken, or that you have some degenerate values in your environment or even in your own .telnetrc file. The 'telnet' client exchanges a number of key environment variables with the daemon to which it connects. This is to configure your terminal type, set your username and your DISPLAY values, your timezone, and some other stuff. It's possible (though unlikely) that you could be tripping over something that the 'in.telnetd' on your target really doesn't like.
Hopefully that will help.
When asking about these sorts of problems it's important to be quite specific about the failure mode (the symptoms). It is VERY important to capture and quote any error messages that you get and to explain exactly what command(s) you issued to elicit those symptoms.
Unfortunately, crafting a good question is sometimes harder than answering one. (In fact I have managed to come across the answer on many occasions while I was writing up the question I intended to post. The process of rigorously describing the problem has often led me to my own answers. Sometimes I post the message with my solution anyway).
One tip for troubleshooting this: starting with 'ping' is a good idea. It basically eliminates a number of possible problems from the low-level "is the network card configured and is a cable plugged into it?" parts of your problem. It's also good to do a 'traceroute' to your target. This might show that your packets are being routed through some unexpected device that is filtering some of your traffic.
If you have console access to the target server (including a "carbon proxy" --- a person on the phone in front of it) then you can run (or have your proxy run) the 'tcpdump' command. This can show you the headers of every packet that comes across a given network interface. 'tcpdump' has a small language for describing the exact sorts of traffic that you want to see and filtering out all the other traffic that you don't want. If you search the LG AG archives on 'tcpdump' you should find a number of examples of how to use it. You might go for something like:
tcpdump -i eth0 -n host $YOURCLIENT and port 23
... for example. (TCP port 23 is the standard for telnet traffic).
If that doesn't work, you might consider temporarily replacing your 'in.telnetd' with an 'strace' wrapper script. Basically you just rename the in.telnetd file to in.telnetd.real and create a shell script (see below) to monitor it:
#!/bin/sh
exec strace -o /root/testing/telnet.strace /usr/sbin/in.telnetd.real
I've described this process before as well. Here are links to a couple of those:
- Issue 20
- http://www.linuxgazette.com/issue20/lg_answer20.html
- Issue 17
- http://www.linuxgazette.com/issue17/answer.html
(use your browser's "search in page" -- [Alt][F] in Netscape and the / key in Lynx -- to search on 'strace' to find the messages I'm talking about. Those older issues were back before Heather was doing my HTML for me, and splitting each message/thread into separate HTML pages like I should have been doing all along).
That 'strace' trick is surprisingly handy. At Linuxcare we use it all the time, and it often helps us find missing config files, directories where files should be, files where directories should be, mangled permissions, and all sorts of things. There's another tool called 'ltrace' which gives similar, though slightly higher level, information.
Using 'tcpdump' and 'strace' you can troubleshoot almost any problem in Linux. They are like the "X-Ray" machines and CAT/PET scanners for Linux tech support people. However, I don't recommend them lightly. Go through the list of common ailments that I listed first, consider using ssh instead, and then see if you need "surgical diagnostics."
From Patricia Lonergan on Fri, 07 Apr 2000
How would I find the following on the version of Unix I am using: OS type and release, node name, IP address, CPU type, CPU speed, amount of RAM, disk storage space, number of users who have ids, number of hosts known. Thanks, Answer Guy
The command:
uname -a
... should give you the UNIX name (Linux, SunOS, HP-UX, etc), the kernel version/release, architecture, and some other info. (It might also include the kernel compilation date and host.)
The command:
ifconfig -a
... should give you the IP address, netmask and broadcast address of each interface in the system.
The command:
hostname
... should give you the DNS hostname that this system "thinks" it has. Looking that up via reverse DNS using a command like:
dig -x
... might be possible if you have the DNS utils package installed.
From there things start to get pretty complicated depending on which flavor of UNIX you're on, and how it's configured. (In fact there are exceptional cases where the preceding commands won't work):
I'll confine the rest of my answers to Linux.
You can get the CPU type and speed using the command:
cat /proc/cpuinfo
(assuming that your kernel is compiled with the /proc filesystem enabled and that you have /proc mounted. Those are the common case).
Linux provides a 'free' command to report on your RAM and swap availability and usage. Many UNIX systems will have the 'top' command installed. It can also provide that information (though it defaults to interactive mode --- and thus is less useful in scripts).
Any UNIX system should provide the 'mount' and 'df' commands to generate reports about what storage devices are attached and in use (mounted) and about the amount of free space available on each. Note you should track not only your free space (data blocks) but your free inodes (management data), so use both of the following commands:
df
df -i
The 'mount' command will also report the filesystem types and any options (readonly, synchronous, etc) that are in effect on these. You might have to use the 'fdisk -l' command to find any unmounted filesystems (that might not be listed in your /etc/fstab file) under Linux. Solaris has a similar command called prtvtoc (print volume table of contents).
Asking about the number of user accounts is straightforward on a system that is just using local /etc/passwd and /etc/group files (the default). You can simply use the following:
wc -l /etc/passwd
... to get a count of local users. Note that many of these accounts are purely system accounts, used to manage the ownership and permissions on files and system directories. If you read through that file a little bit it should be obvious which ones are which. In general Linux distributions start numbering "real" users (the ones added after the system was installed) at 500 or 1000, so all of the names with a UID above that number are "real" (or were added by the system administrator).
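For example (assuming your distribution starts real users at UID 500; the UID is the third colon-delimited field of /etc/passwd):

awk -F: '$3 >= 500' /etc/passwd | wc -l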
However, it's possible (particularly in UNIX system that are installed on corporate networks) that your system(s) are using a networked account system such as NIS or NIS+. You might be able to get some idea of the number of users on such a network using the 'ypcat' command like so:
ypcat passwd | wc -l
The question of "number of hosts known" is actually a bit silly. "Known" in what sense? Most systems use DNS for mapping host names to IP addresses. Thus any Internet connected system "knows" about millions of hosts. It is possible for a sysadmin to provide the system with a special list of hosts and IP addresses using the /etc/hosts file, but this is pretty rare these days. (It's just too likely that you'll get those files out of sync with your DNS).
I suppose you should also look for commands with the letters "stat" in their name. Read the man pages for 'vmstat', 'netstat', 'lpstat', etc. Many versions of UNIX also include a 'sar' command, though that isn't common on Linux. 'rpcinfo' and 'route' are other useful commands.
This whole set of questions has a "do my homework" tone to it (particularly since it's coming from a .edu domain). Keep in mind that I've just barely scratched the surface of the information that's available to a skilled sysadmin who needs to become familiar with a new machine. There are hundreds of other things to know about such a system.
Most of the information you care about is under /etc. On a Linux system there is also quite a bit under /proc (most forms of UNIX that support /proc only put process information thereunder, while the Linux kernel uses it as an abstraction to provide all sorts of dynamic kernel status information out to user space).
From Carlos Ferrer on Thu, 13 Apr 2000
Do you know how to connect an NT box with an OS/2 box using null modem?
Thanks, Carlos Ferrer
Yes. You plug one end of the null modem cable into a serial port on one of the boxes, and the other into a serial port on the other box. Then you install some software on each, configure and run it.
Before you ask:
NO! I don't know what NT or OS/2 native software you should use. That's your problem. I answer Linux questions. I'm the Linux Gazette Answer Guy.
So, why don't you ask for technical support from IBM and/or Microsoft? They sold you the software. They should provide the support. The Linux community gives us software, so I give away a lot of support.
Meanwhile, you might have some luck with plain old MS-DOS Kermit. NT and OS/2 are supposed to support running DOS programs, and they should allow you to configure their DOS "boxes" (virtual machines, whatever) to have access to their respective serial ports. You can also get Kermit '95 which should work on Win '9x, NT, and OS/2. This is a commercial package. It is not free.
C-Kermit for UNIX and Linux is also not free, though it can be freely downloaded and compiled. You should read its license to determine whether you can use it freely or whether you are required to buy the C-Kermit book. (Of course you could support their project by buying the books regardless). There is also a G-Kermit which is GPL'd.
You can learn about Kermit at:
- Columbia University Kermit Project Home page
- http://www.columbia.edu/kermit
From James Knight on Thu, 13 Apr 2000
If I have an interactive program running on a VT, say tty1, can I temporarily "control" that VT from another, say tty2, or better yet, through a telnet connection (pts/n)?
For instance, I have naim running on tty1. I've been logging in via telnet, killing that process, and starting it again so they don't interfere with each other. Can I just pretend I'm at the console somehow, so that when I log out, I'll still be connected to naim?
Thanks, Jay Knight
The easiest way to do this is to run 'screen'.
Instead of starting interactive programs directly from your VT login shell, run 'screen' and start the program thereunder. Now you can "detach" the whole screen session (with up to 10 interactive programs running under it) and re-attach from any other sort of terminal login.
I do this routinely. I'm doing it now. Currently I'm working in an xterm which is 99 characters wide and 35 lines tall. Earlier I had connected to my system via ssh, and I "yanked" my 'screen' session over to that xterm (80 characters by 50 lines) using the following command:
'screen -r -d -e^]]'
... the -d option tells my new 'screen' command to look for another 'screen' session and detach it from wherever it is, the -r is to re-attach it to my current terminal or pseudo-terminal, and the -e option lets me set alternative "escape" and "quote" characters (more on that in a moment).
I've described 'screen' in previous LG issues. However, those discussions are hard to find. For one thing, the desired features are difficult to describe, and the keywords that do cover them are far too general. For example, so far the keywords we've used are:
You: temporarily control VT Me: attach re-attach detach screen session yank
... see?
Anyway, here's the VERY short intro to 'screen':
First, 'screen' just starts an extra shell. So, if you just type 'screen' (most distributions include 'screen') that's pretty much all you'll get. (You might get some sort of copyright or other notice). Now you can run programs as usual. The only big difference is that there is one key ([Ctrl]-[A] by default) which is captured by 'screen' rather than passed through to your programs. That one "meta" key is your trigger to fire off all of 'screen's other features. Here are a few of them (listed below as [Meta]+(key)):
[Meta] [a] -- send a literal [Meta] to the current session
[Meta] [c] -- create an additional shell session under this 'screen'
[Meta] [w] -- display/list current sessions (windows)
[Meta] [A] -- (upper case 'A') set this session's (window's) title
[Meta] [Esc] -- go into "scrollback" and "copy" mode (keyboard cut & paste)
[Meta] [Space] -- cycle to the next session
[Meta] [Meta] -- switch to most recent session
[Meta] []] -- (right square bracket) paste copy of "cut" buffer
[Meta] [?] -- Quick help page of other keystrokes
[Meta] [d] -- Detach
[Meta] [S] -- (upper case 'S') split the screen/display (like 'splitvt')
[Meta] [Q] -- (upper case 'Q') unsplit the screen/display
[Meta] (digit) -- switch directly to session number (digit)
There are many others. There are many features to 'screen.' It is the UNIX/Linux terminal power tool. You also get the ability to share your session(s) with another user (like the old 'kibitz' package). That's very handy for doing online tutorials and tech support. You get a scrollback buffer and keyboard driven cut and paste (with 'vi' inspired keybindings; you can even search back through the current text and scrollback buffer).
Most of the URLs you see in the "Answer Guy" are pasted in from a 'lynx' session using 'screen.'
If you forget to detach, you can use the -d option (shown above) to remotely detach a session. You can use other options to select from multiple 'screen' sessions that you have detached. You can also run 'screen' commands to start up programs in their own screen windows.
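For example (the commands and session name here are arbitrary):

screen -t mail mutt
screen -d -m -S backup sh mybackup.sh

... the first, run from within a 'screen' session, opens a new window titled "mail" running mutt; the second, run from an ordinary shell, starts a detached session named "backup".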
Oddly enough I've even found that I occasionally start or re-attach to one 'screen' session on a remote system from within a local 'screen' session. When I do this I use the -e option to give that other (remote) screen session a different meta key. (That's what I did in the sample command up there, with the '-e^]]' setting it up so that [Ctrl][Right Square Bracket] was the meta key for that session. I did that while I was at work. Before I left there I detached it. When I got home I re-attached it to this 'xterm' (where I'm typing right now). At first I just re-attached it with '-r' --- but then I realized that it was using my other meta key. So I detached again and re-attached with '-r -e^Aa' to reset those to the defaults (to which I'm more accustomed).
Since I've introduced people at Linuxcare to this meme, I've found that many of them have come to view their "sessions" in a way that's similar to mine. We maintain our state for weeks or months by detaching, logging out, going elsewhere (into X, out of X, from work, from home, etc), and always re-attaching to our ongoing sessions. It's a whole different way of using your computer.
So, try it. See if it does the trick for you.
From FRM on Fri, 14 Apr 2000
hi,
My SunOS 4.1.4 kernel is already configured for the max of 256 pty's (pseudo devices), but my users complain about running out of them often. Do I need to add files to the /dev directory, or recompile the kernel again... or?
any help much appreciated,
Randy A Compaq Computer Corp.
SunOS 4.1.4??? Hmm. Maybe you need an upgrade.
If 256 is the max for SunOS then I don't know what you'd do to get around that. Under Linux the max is about 2048. I suppose you could try making a bunch of additional device nodes and re-writing/compiling a bunch of your apps to open the new group of nodes rather than the old ones.
I'd say that SunOS 4.1.4 is showing its age. You might want to consider switching to OpenBSD, NetBSD, or Linux. (Note: SunOS was a BSDish UNIX, so you might be more comfortable with the BSDs than you would be with Linux. I don't know about binary compatibility for your existing applications).
(Obviously I don't know much about SunOS. I'm the LINUX Gazette Answer Guy and my experience with other forms of UNIX is too limited and crufty to help you more than that).
From Alain Toussaint on Sun, 16 Apr 2000
Hello Answerguy,
Last week, I installed Debian (a really basic installation) on a factory-fresh
disk and then set out to compile XFree86 4.0 (I did not have X previously). It did compile and work fine, and I've been using it daily with the startx command. But Wednesday this week, the hard disk on my mother's computer died, so I set out to build a Linux boot disk containing an X server so she could log in to my system and continue to do her work. I then tried xdm tonight (locally on my box first). xdm loaded and took my credentials, but it did not open a session, either as a user (alain) or as root. I looked in the .xsession-errors file but I've come to no conclusion. Here's the content of the file:
> /home/alain/.xinitrc: exec: xfwm: not found > /home/alain/.xinitrc: xscreensaver: command not found > Xlib: connection to ":0.0" refused by server > Xlib: Client is not authorized to connect to Server > xrdb: Can't open display ':0' > Xlib: connection to ":0.0" refused by server > Xlib: Client is not authorized to connect to Server > xrdb: Can't open display ':0' > Xlib: connection to ":0.0" refused by server > Xlib: Client is not authorized to connect to Server > xrdb: Can't open display ':0'
The first 2 errors don't worry me much (I have xfce installed, and as for xscreensaver, I don't want it; since I'll install KDE soon, I'm not pressed to fix the xfce script). But the Xlib errors worry me quite a bit. I then downloaded Debian's xdm package and uncompressed it in a temporary directory to compare the contents of both our /etc/X11/xdm directories (mine as well as the Debian one), but I didn't find the root of the problem. Could you please help me?
Thanks a lot Alain Toussaint
Hmmm. It sounds like a problem with your .Xauthority file. You said you were using 'startx' before, and you're now trying to use 'xdm'. What happens if you go back and try 'startx' again?
'xdm' has a different way of handling the 'xauth' files (using the 'GiveConsole' and 'TakeConsole' scripts). Do a 'ps' listing and see if you have an X server with arguments like:
X :0 -auth /var/xdm/Xauthority
There's supposed to be a "GiveConsole" script that does something like:
xauth -f /var/xdm/Xauthority extract - :0 | xauth -f ~$USER/.Xauthority merge -
(Which extracts an MIT Magic Cookie or other access token from xdm's Xauthority file and merges it into the "cookie jar" of your user. This can, in principle, allow multiple accounts on a host or across a network to access the same display server).
Anyway, there are many other tricks that you can use to troubleshoot similar problems.
I sometimes will start the X server directly (bypassing 'xinit', 'startx', and 'xdm'); then switch back to one of my text mode consoles (usually when I'm doing this I slap the old & on the end of the X server's command line, if I forget then I do the old [Ctrl]+[Z], 'bg' key and command). Next I 'export DISPLAY=:0' (or :1, or whatever), and start an 'xterm &'
At that point I switch back to the X virtual console, and use the resulting 'xterm' to work more magic. I may need to run my own 'xrdb' commands to merge in my own entries into the "X resources database" (think of that as being your X server's "environment" --- a set of name/pattern and value pairs which are used by X client programs to determine their default appearance, behaviour, etc).
I might also run a number of 'xset' commands to add to my font path and play with other settings.
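Condensed, that manual bring-up looks something like this (the display number and file names are arbitrary):

X :1 &
export DISPLAY=:1
xterm &
xrdb -merge ~/.Xresources
xset fp+ /usr/local/fonts/misc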
Doing this sort of "worm's eye" inching through the labyrinthine X initialization process will usually isolate any problems that you're having. The hard part is playing with X enough to realize that it's going through all of these steps at all.
I presume that you already know some of that (since you've already fetched your own XFree 4.0 sources and built them). It's clear that you're not a novice. Anyway, try looking for .Xauthority files. Allegedly, if you simply delete them, the X server is left wide open. I don't know if that's still true in XFree 4.0 but it seemed to work on XFree 3.x the one time I tried it.
Good luck on that new X server. I haven't grabbed it to play with it yet. I may wait until someone has a first cut of a Debian binary posted to "woody" (the current development/experimental branch of the Debian project).
Answered By Carl Davis on Mon, 17 Apr 2000
Thanks Jim, but I have solved the mystery...
The problem was that lilo does not like multiple "append" statements in /etc/lilo.conf. I fixed this by putting all the statements on the one append line, separated by commas, e.g. append="statement1, statement2, statement3". You may wish to add this snippet to the list of 2cent tips.
Regards
Carl Davis
-----Original Message-----
From: Carl Davis
Sent: Thursday, April 13, 2000 9:12 AM
To: 'linux-questions-only@ssc.com'
Subject: Linux
Hi Jim,
My compliments on a great column. I am running Linux (Mandrake 7) on a Celeron 466 with 128 Mb RAM. My problem is I cannot persuade Linux to recognise more than 64 Mb. I have tried adding the following to lilo.conf: append="mem=128M", to no avail. It still comes up with only 64 Mb. Various flavours of Windoze can see the full 128 Mb. Any ideas on what's going on here ?
Carl Davis
From Scott on Mon, 17 Apr 2000
Hello Answer guy,
The company I work for is going to start developing products for Linux soon. Part of my preparation for this is to find out about Linux file systems. One thing I haven't been able to find is how to find out what filesystem type each mounted filesystem is using. Is there a command line utility that shows this? How do I accomplish this programmatically?
Here's a simple shell script that will parse the output from the 'mount' command and isolate the device name and type for each mounted filesystem:
mount | {
	IFS=" (,)"
	while read dev x mpoint x type opts; do
		echo $dev $type
	done
}
Notice that this is one of my common "data mill loops" --- you pipe the output of some command into a 'while read ...; do' loop and do all your work in the subprocess. (When I'm teaching shell scripting, one of the first points I emphasize about pipes is that a subprocess is implicitly made on one side of your pipe operator, or the other).
We also see that I'm using the variable "$x" to eat extra fields (the words "on" and "type" from 'mount's output). Finally, I'm using the shell-special IFS (inter-field separator) shell variable to add the characters "(,)" to the list of field separators. This means that each of the mount options --- read-only vs read/write, nodev, nosuid, etc --- will be treated as a separate value. I could then, within my 'while' loop, nest a 'for' loop to process each option on each filesystem.
Creative use of IFS and these 'while read ...; do' loops can allow us to do quite a bit directly in shell without resorting to 'awk' and/or 'sed' to do simple parsing. Creative use of the 'case' command (which uses glob patterns to match shell variable values) is also useful and can replace many calls to 'grep'.
To get filesystem information from within a C program you'd use the 'statfs()' or 'fstatfs()' system calls. Read the 'statfs(2)' or 'fstatfs(2)' man pages for details. Fetch the util-linux sources and read the source code to the 'mount' and 'umount' commands for canonical examples of the use of these and related system calls.
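Here's a minimal sketch of the statfs() approach (error handling is trimmed; the f_type field comes back as a filesystem "magic number" --- 0xEF53 is ext2, for example):

#include <stdio.h>
#include <sys/vfs.h>

int main (int argc, char * argv[])
{
	struct statfs sbuf;
	if ( argc < 2 ) {
		fprintf(stderr, "Usage: %s path\n", argv[0]);
		return 1;
		}
	if ( statfs(argv[1], &sbuf) != 0 ) {
		perror("statfs");
		return 1;
		}
	/* print the magic number, then total and free blocks */
	printf("type=0x%lx blocks=%ld free=%ld\n",
		(unsigned long) sbuf.f_type,
		(long) sbuf.f_blocks, (long) sbuf.f_bfree);
	return 0;
}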
Any help is appreciated!
Scott C
From Andrew T. Scott on Mon, 17 Apr 2000
Jim Dennis wrote: .....
and do all your work in the subprocess. (When I'm teaching shell scripting ...
Where can I sit in on this class?
-Andrew
[ Luckily for Linuxcare, its training department has a whole bunch of people in it (wave hi, everybody!) because they've got Jim assigned to Do Cool Stuff, so he's not teaching right now. To be fair, they are only one among many training providers for Linux; you can see a decent listing at http://www.lintraining.com which redirects to Linsight's directory by location and news on the subject.
-- Heather. ]
From vg24 on Tue, 18 Apr 2000
Hi Answer Guy,
I had a few small questions about my Slackware Linux Box...
> (1) How do I get applications (like xmms) to startup automatically when I > start FVWM95 with a 'startx' command? I'm hoping to achieve something > similar to the "StartUp" menu in Win98.
Normally the 'startx' command is a shell script which looks for a ~/.Xclients file. That file is normally just another shell script. It consists of a series of commands that are started in the background (using the trailing '&' shell operator), and one command that is 'exec'd (started in the foreground, and used to replace the shell script's interpreter itself).
That foreground command is usually a window manager. In any event it becomes the "session manager" for the X server. When the program exits, the X server takes that as an indication that it should shutdown.
So, the answer to your question is to add the appropriate commands to the .Xclients script in your home directory.
If you are logging in via 'xdm' (a graphical login program) then it may be that your system is looking for an ~/.Xsession script instead. I usually just link the two names to one file. However, you certainly could have completely different configurations based on whether you logged in via 'xdm' or used the 'startx' command.
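For example, a minimal ~/.Xclients might look something like this (the particular clients and window manager here are just placeholders; substitute your own):

#!/bin/sh
# background clients, each ended with '&'
xclock &
xmms &
xterm &
# one foreground command; when it exits, the X session ends
exec fvwm95

Make it executable (chmod +x ~/.Xclients) and, to share it with 'xdm' logins, link the two names from your home directory: ln -s .Xclients .Xsession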
Of course this is just a matter of convention and local policy. As I said, 'startx' itself is often a shell script. At some sites you use 'xinit' instead of 'startx' --- and at others there are different ways to launch the X server and completely different ways to start the various clients that run under it and control it.
You mentioned fvwm95. This is one of several variants of the fvwm window manager. That's a traditional window manager. It just gives you a set of menus (click each of your mouse buttons on the "root window" --- what other windowing systems call the "wallpaper" --- to see them), and a set of window decorations (resizing bars, corners, title bars and buttons).
In recent years the open source community has created somewhat more elaborate and "modern" graphical user environments like: KDE, GNOME, and GNUStep. These are whole suites of programs which can be combined to provide the sort of look, feel and facilities that people have come to expect from MacOS, MSWindows, etc.
If you really want something like the "Start Menu" in Win'9x then you may want to look at KDE or GNOME. These have "panels" which provide a much closer analogue to the environment that you are used to.
(Note: It is also possible to make either of these environments look completely different than MS Windows. They both support "themes" which are collections of settings, graphics, textures, icons, even sounds, that customize the appearance and operation of a Linux GUI. For more information and some nice screen shots of the possibilities, take a look at http://www.themes.org).
> (2) I recently upgraded my kernel and filesystem binaries from a 2.0.34 kernel to a 2.2.13 kernel. I have XFree86 3.3.5 installed. I also upgraded my motherboard from an Intel P75 to an AMD K6-450. I kept the 32 Megs of RAM the same (a SIMM). However, now I notice that Netscape (and others?) grind my hard drive more when I attempt to open new browsers. I'm pretty sure I'm low on memory, but since I'm low in cash, I'd rather not invest in a DIMM. I didn't have any swap space set up, and don't now. I actually upgraded from Netscape 4.1 to 4.6. Could this be the problem?
Hmmm. Certainly it is likely that Netscape 4.6 is taking up more memory than 4.1. However I note an inconsistency here. You say you didn't have any swap space. If that was true then your shortage of memory should have caused failures when trying to launch programs --- rather than the increased disk thrashing. I think it's likely that you actually do have some swap space. You can use the following command to find out what swap partitions and files are active on your system:
cat /proc/swaps
... which should provide a list of any swap space that is in use. Of course the 'free' command will also summarize the available and used swap space. However, the /proc/swaps "pseudo-file" (node) will actually tell you "where" the swap is located.
Get the extra memory. It's not that expensive and it is the best performance upgrade that you can make for your system.
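That said, if the RAM purchase has to wait a bit, a swap file is a free stopgap. Here's a rough sketch, run as root; the path and the 32 MB size are arbitrary choices:

dd if=/dev/zero of=/swapfile bs=1024 count=32768
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# to activate it on every boot, add a line like this to /etc/fstab:
# /swapfile  none  swap  sw  0 0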
> (3) I was running GNOME/enlightenment, but the GNOME panel would never come up automatically. How can I get the GNOME panel to initialize, along with the GNOME file manager (so I can have the cool desktop icons)?
Hmmm. I'm not much of a GNOME or KDE person. Do you have the rest of GNOME installed? enlightenment is a window manager. It was the default window manager for GNOME --- but they are separate projects. So, do you have GNOME installed? Are you starting 'gnome-session' (running it from your .Xclients/.Xsession script as described above)?
Try that. I think there are now a couple of window managers that implement the GNOME hints and APIs --- so you don't have to use enlightenment.
> (4) Lastly, I wanted to trim my syslog and wtmp files. Is there any way I can do this? Can I just tail -30 the last 30 lines into a new file? I think the wtmp is binary, so any ideas?
You are correct: the wtmp and utmp files are binary. They cannot be trimmed with simple shell scripts and text utilities. The utmp file shouldn't grow (by much), but the wtmp will grow without bound. The usual way of dealing with wtmp is to simply rename the current one, 'touch' a new one, and forget about it.
That's fine for wtmp.
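In other words, something like the following (the .1 suffix is an arbitrary convention; preserve whatever ownership and permissions your wtmp already has):

cd /var/log
mv wtmp wtmp.1
touch wtmp
chmod 644 wtmp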
However, DON'T try that with /var/log/messages or the other syslog files. Those are held open. If you rename or delete them, they continue to grow.
Yes! You read that correctly: if you remove a file while some process has it open, then you haven't freed up any disk space! That's because the 'rm' command just does an 'unlink()' system call. When the last link to a file is removed, AND THE FILE IS NOT OPEN, the filesystem drivers perform some housekeeping to mark the associated inode(s) as available, and to add all the associated data blocks to the filesystem's "free list." If the file is still open then that housekeeping is deferred until it is closed.
So the usual way to trim syslog files (since syslogd stays running all the time, and keeps its files open under normal circumstances) is to use 'cp /dev/null' or 'echo "" > ' to truncate them. Another common practice is to remove the files and use the 'kill -HUP $(cat /var/run/syslog.pid)' command to force syslogd to re-read its configuration file, close all its files, and re-open them.
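Concretely, the two approaches look something like this (the pid file name varies between systems; check /var/run to see what yours is called):

# truncate in place; syslogd keeps writing to the same (now empty) file
cp /dev/null /var/log/messages

# ... or rotate, then make syslogd close and re-open its files
mv /var/log/messages /var/log/messages.1
touch /var/log/messages
kill -HUP $(cat /var/run/syslog.pid)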
However, none of that should be necessary. Every other general-purpose distribution has some sort of log rotation scripts that are run out of 'cron.' I'm pretty sure that Patrick (Volkerding, principal architect of Slackware) didn't neglect that.
(I should point out that I haven't used Slackware in several years. Nothing against it. I just have too few machines and too little time).
Thanks for any help you can provide! Vikas Gupta
Well, I think this should nudge you in the right directions.
From Deepu Chandy Thomas on Tue, 18 Apr 2000
Sir,
I wanted to use the Kermit protocol with minicom. I use rz/sz for ZMODEM. Where do I get the files for Kermit?
Regards, Deepu
Look at http://www.columbia.edu/kermit for canonical information about all the official Kermit packages, and at: http://www.columbia.edu/kermit/gkermit.html for information specifically about the GPL kermit package (which implements the file transfer protocol without the scripting, dialing and other features of C-Kermit).
(Note: C-Kermit can also be used as a 'telnet' or 'rsh' client with a scripting language, and many other useful features. It is a full featured communications package. Recent versions have even added Kerberos support!)
From Alex Brak on Fri, 14 Apr 2000
I'm having a problem with my linux box, and can't for the life of me figure out what's wrong. Here are the symptoms:
> server:~/scripts$ whoami
> alex
> server:~/scripts$ ls -al ./script
> -rwxr----- 1 alex home 43747 Apr 10 22:31 ./script*
> server:~/scripts$ ./script
> bash: ./script: No such file or directory
(note: the '*' at the tail end of the file listing is merely a symbol specifying that it's an executable file; it is not part of the filename)
Technically that "file type marker" is the result of using the -F option to 'ls'.
The most likely cause of this problem is the #! (shebang) line that should be at the beginning of your script. If that is incorrect then it is common to get this error: your shell is telling you that it can't find the script's interpreter.
If './script' was a binary executable then I'd also suggest looking for missing shared libraries. In fact, it's possible that your shebang line is pointing to some interpreter (/usr/local/ksh or something) which is present, and executable, but is missing some shared library. It is even possible that a shared library depends on another, and that this is what is missing.
As you can see from the above, I'm the owner of the file in question, and have execute permission on it. The file exists. Yet bash claims the file is not there. I've tried with shells other than bash (every other shell available on my system, including csh, tcsh, ash, and zsh). I've even tried executing the command as root, to no avail.
This exact same problem has arisen before with another script I wrote. I couldn't fix it then, either.
Check your shebang line. It should read something like:
#!/bin/sh
Note: there are NO SPACES in this line. Do NOT put a space between the #! and the interpreter's name.
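One quick way to see exactly what's on that first line, including invisible characters, is to dump it with 'od' (a DOS-style carriage return, another classic cause of this very error, shows up as \r in the output):

head -1 ./script | od -c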
I'd like to also note that this problem arises intermittently: just after finishing ~/scripts/script I created another script named "test", did chmod u+x on it, and it executed just fine. ~/scripts/script still refuses to execute, though :( Please note that I've tried renaming the file. I've also tried moving it to another location on the directory tree. None of these have helped.
A text file without any shebang line, which is marked as executable, will be executed through some magic that is dependent on the shell from which it is being invoked.
I'll probably get this wrong in the details, but the process works something like this:
You issue a command. The shell tries to simply exec() it (after performing the command line parsing necessary to expand any file "globs" and replace any shell variables, command substitution operators, parameter expansion, etc). If that execution fails the shell may attempt to call it with $SHELL -c (or it might do something a bit different: that seems to be shell dependent).
Notice that the behaviour in the first case is well-defined. Linux has a binfmt_script module (usually compiled/linked statically into your kernel) which handles a properly formatted shebang line.
I have not experienced any other problems with my system that I'm aware of. Does anyone know what could be causing this, or how to fix the problem?
I'm running Linux 2.2.14 on my Pentium 120, with a Slackware distribution. The file in question exists on the root partition, in an ext2 file system, which the kernel supports. If there's any other relevant information I haven't provided, please don't hesitate to ask.
If you were getting "operation not permitted" I'd suggest checking your mount options to see if the filesystem was mounted 'noexec' (which would be a very bad idea for your root fs). If you were getting a message like "cannot execute binary" then I'd think that maybe you had an old a.out binary and a kernel without the a.out binfmt support.
But I'm pretty sure that you're having a problem with your shebang line.
Thanks, Alex
From Alex Brak on Sun, 16 Apr 2000
Spot on. Many thanks.
Alex
From Credit Future Commercial MACAU Company on Wed, 5 Apr 2000
Hello sir
I installed Red Hat 6.1 on my system but it does not display more than 256 colours, although my VGA card is a 16 MB Voodoo. Why is that? Can you help me out here??? I have tried startx -bpp16, but still my pics ("jpegs, bmp") aren't fine!!! They are displayed in dots; the same pic in Windows looks good.
thanks, Faisal
From Heather on Wed, 5 Apr 2000
Hello sir
Heather isn't a masculine name in the U.S. I'll assume this is intended for the Answer Guy column, and give a first shot at answering it.
I installed Red Hat 6.1 on my system but it does not display more than 256 colours, although my VGA card is a 16 MB Voodoo. Why is that? Can you help me out here??? I have tried startx -bpp16, but still my pics ("jpegs, bmp") aren't fine!!! They are displayed in dots; the same pic in Windows looks good. thanks, Faisal
You have not specified what resolution under MSwin had the qualities you seek. Under X, you must run the correct video server to match your card if you want best performance, but you can nearly always get the screen working with a lesser server.
The VGA16 server only provides 16 color service, and the generic SVGA server defaults to 8 bits per pixel (256 colors). If 256 colors is what you are stuck at, you might be running one of these at its default depth. Or, your /etc/X11/XF86Config file may be telling it to default to this level - the command to change the default is startx -- -bpp 16
with the space. Also, startx is a shell script, and launches a particular server... usually /usr/X11R6/bin/X which itself is a link to the real one... and so, you may be running a server you didn't intend.
Good luck with your JPEGs.
From fai, Answered By Heather Stern on Fri, 7 Apr 2000
Thanks for the help, it worked finally!!!!
Glad to hear it worked for you. Sorry I wasn't able to reply in email in a timely fashion - publishing deadlines, you know.
Can you help me a bit more by telling me: I want to start with this command (startx -- -bpp 32) by default. How can I do this????
thanks Faisal
One way would be to create a .xserverrc file in your own home directory; you'd have to specify your X server, but then you could pass it parameters too:
/usr/X11R6/bin/XF86_SVGA -bpp 32
Assuming that's the right server for you, of course. If startx is just plain "getting it right" except for that itty bitty detail of color depth, you could instead create a bash alias or put a one-liner shell script in your path. I like to keep such personal scripts in ~/bin (that's bin under my home directory). Name it something much shorter, like myX, and save some typing too.
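For instance, a sketch of that one-liner (myX is just a suggested name):

#!/bin/sh
exec startx -- -bpp 32

Save it as ~/bin/myX, run chmod u+x ~/bin/myX, and from then on typing myX does the whole dance.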
[ So where was Jim on this one? Well, he liked my answer, and was busy with other questions and stuff to do.
-- Heather. ]
Answered By Nadeem Hasan on Mon, 03 Apr 2000
Hi,
This is in reference to the above question in "The Answer Guy" and its answer. The use of ipchains/ipfwadm is a bit of overkill to achieve this. The better way is to simply run the following as root:
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all
This should cause the kernel to ignore all ICMP echo ("ping") requests.
Cheers, -- Nadeem
Just when you think you know everything.
From Nadeem Hasan on Tue, 11 Apr 2000
Hi,
The Gazette still has the old description of disabling ping echo responses. Does that mean it's better than what I suggested?
Nadeem
I don't have the power to change what I've published in previous months. Your (better) suggestion on how to disable the Linux kernel's ICMP echo responses (to 'ping' requests) should appear in next month's issue.
Now, what was that magic /proc node again?
Ahh, here it is:
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all
... I'd never remember that, but the node is there and I'd recognize the meaning from the name. (So it's in my passive rather than active vocabulary).
There are some other interesting nodes there --- and I think the one about "icmp_echo_ignore_broadcasts" looks useful.
It would be neat if someone wanted to write up a HOWTO on "Useful /proc Tips and Tricks" (hint, hint). I've done some performance tuning by tweaking and playing with some of the entries under /proc/sys/vm (the virtual memory sysctl's), and I know others have done even better than I could (at Linuxcare I had to call on our real experts to help me out a while back for one gig).
I guess the tips would fall into two or three general categories: robustness, security, and performance. For example the /proc/sys/kernel/cap-bound (bounding capabilities set) can be modified to secure some facilities even from a subverted 'root' process (like the BSD securelevel features), and I guess that /proc/sys/vm/overcommit_memory might allow one to prevent the system from overcommitting memory (providing more robust operation at the expense of reducing our capacity to run multiple concurrent "memory hogs" that ask for more core than they actually need).
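The pattern for inspecting and adjusting any of these knobs is the same one Nadeem used above; for example (node names vary between kernel versions, so check that they exist on your system before scripting around them):

cat /proc/sys/vm/overcommit_memory
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts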
A good HOWTO would be organized by objective/situation (Increasing File Server Performance, Limiting Damage by Subverted and Rogue Processes (Crackers), etc) and would include notes on the tradeoffs that each setting entails. For example one might disable ICMP responses (for security?) but one should be aware that anyone who has a legitimate reason to access ANY other service on your system might want to 'ping' it first to ensure that it is reachable before they (or their programs) attempt to access that service. (In other words it makes no sense to disable ICMP responses on a web, mail, DNS, FTP or other public server).
Unfortunately I have neither the time nor nearly enough expertise to write this. There are already some notes in the Linux kernel source trees under /usr/src/linux/Documentation/sysctl/, and I remember that someone is working on a tool to automate some of this; PowerTweak/Linux (http://linux.powertweak.com/news.html) comes to mind.
Anyway, enough on that.
From Apichai T. on Mon, 03 Apr 2000
Dear sir,
May I ask for your advice on the steps to set up a Linux box so that it is possible to remotely execute graphical applications?
Thanks and Best regards, Jing
Here are a couple of HOWTO and mini-HOWTO links:
- Remote X Apps mini-HOWTO
- http://www.linuxdoc.org/HOWTO/mini/Remote-X-Apps.html
(I've copied its author, Vincent Zweije, on this reply).
I don't recommend using his example shell script from section 6.2:
#!/bin/sh
cookie=`mcookie`
xauth add :0 . $cookie
xauth add "$HOST:0" . $cookie
exec /usr/X11R6/bin/X "$@" -auth "$HOME/.Xauthority"
The problem here is that the cookie value is exposed on those command lines (which are world readable via /proc and the 'ps' command). It may also be exposed if it is exported to the environment. Safe handling must be done through pipes or files (or file descriptors). Note that the window of exposure is small --- but unnecessary. Read the 'xauth' man page for more details.
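For example, one common way to copy a cookie to another machine without ever putting it on a command line is to pipe it between two 'xauth' processes, here riding over ssh (remotehost is a placeholder):

xauth extract - "$DISPLAY" | ssh remotehost xauth merge -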
Better yet: Use ssh! (Read Vincent's HOWTO for more on that).
I also notice that Vincent doesn't distinguish between the session manager and the window manager. In practice they are almost always the same program. However here's the difference:
The session manager is the one program that is started in the foreground during the startx or xinit process. The X server tracks this one process ID. When it dies, the X server takes that as a signal to shutdown. Any program (an 'xterm', a copy of 'xclock' or whatever) can be the session manager.
The window manager is the program that receives events for the "root window" (the X Window System term for what other windowing systems call the "wall paper" or "desktop" or "backdrop"). There's also quite a bit more to what the window manager does. You can only run one window manager on any X server at any time. Window managers implement a number of APIs that are unique to them --- so you can't just use "any" X program as your window manager.
It's a subtle distinction since almost everybody uses their window manager as their session manager.
Note: If you're troubleshooting X connections keep in mind that the client must be able to connect to the server via the appropriate socket. For example, to connect to the server on :0 (localhost/unix:0) the program must be able to access the UNIX domain socket (these sockets are usually located in /tmp/.X11-unix/). Obviously chroot() jails could interfere with that (though localhost:0, which is the same as localhost/tcp:0, should still work).
A subtle and rare problem might be if someone were to try running X after building a kernel without support for UNIX domain sockets. It's possible to build a Linux kernel with full support for TCP/IP and yet leave out the support for UNIX domain sockets.
Obviously when looking at Internet domain sockets (TCP/IP) any of the usual routing, addressing, and packet filtering issues can interfere with your clients attempts to connect to port 6000 (or 6001, 6002, etc) on the X server host.
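A couple of quick checks for those situations (assuming display :0, whose UNIX domain socket is normally named X0):

ls -l /tmp/.X11-unix/X0      # does the UNIX domain socket exist?
netstat -an | grep 6000      # is the server listening for TCP connections?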
For a little more on remote access to X server look at VNC (Virtual Network Computing from AT&T Laboratories Cambridge: http://www.uk.research.att.com/vnc) (VNC was originally developed at the Olivetti Research Laboratory, which was later acquired by AT&T).
You don't need this to just run X clients on your X server. However, it's useful to learn about VNC in case you need some of the special features that it provides.
Another good site for finding links to lots of information about X is at Kenton Lee's "X Sites" (http://www.rahul.net/kenton/xsites.html) There are about 700 links located there!
Note that while X is currently the dominant windowing system for Linux there are other efforts out there including "Berlin" (http://www.berlin-consortium.org) and the "Y Window System" (http://www.hungry.com/products/Ywindows). I don't know how these projects are going. I see that the Berlin home pages have been updated recently while the Y Window System pages seem to have been stale since March of 1998.
Anyway, good luck.