25/04/2008

Compiz-fusion: Blur and Shadows

Taaris has already posted about this with respect to katapult back in dark and foggy November.
A few remarks on how to get the full monty with compiz-fusion: no shadows behind "flat" windows such as the kicker panel and katapult, and blurred window borders.

Taaris' tip concerning ugly shadows behind katapult's fake transparency also works for kicker, e.g.
CCSM->Window Decoration->Shadow windows: any -(name="katapult" | name="kicker")

Window decoration blurring seems to work quite well by now with nVidia AIGLX; just check the following settings:
  • CCSM->Blur Windows (activated)-> Alpha Blur (activated)

  • Emerald Theme Manager->Emerald Settings->Compiz Decoration Blur Type: All decoration

20/04/2008

Slow resizing in compiz

This has been posted all over the net, so it's more a personal memo stolen from All My Brain.
Window resizing on compiz/ATI is excruciatingly slow with the default settings. To fix that, configure the "Resize Window" plugin in the CompizConfig Settings Manager.
The resize modes Outline, Rectangle and Stretch work fine; Normal (the default) is what keeps slowing you down.

Compiz and vanishing icons in kicker

Compiz tends to interfere with other startup processes, especially the kicker icons - in Xanthippe's case everything vanished except fusion-icon. If you start compiz with compiz-manager, there is a simple solution:
Insert a wait statement right at the beginning of the compiz-manager script (/usr/bin/compiz-manager), which should take care of any processes started right before compiz. In my case, a random number of icons still got eaten, so I added a sleep statement before that:

sleep 5    # give earlier startup processes (e.g. kicker) a head start
wait       # wait for all child processes of this shell to finish

ARCH=`arch`
if [ "$ARCH" = "x86_64" ]; then
    LIB=lib64
else
    LIB=lib
fi
...

Bash wait and sleep statements
The wait statement waits for the termination of a specific job (process ID or job spec given as argument). With no argument, it waits for all child processes of the current shell to finish (that's what we did in compiz-manager). See the help wait shell command.
sleep n simply delays the execution of your script by n seconds. More information in man sleep.
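
A minimal sketch illustrating the difference (the background job and the variable name are just examples):

# sleep pauses unconditionally, wait blocks on child processes
sleep 2 &            # start a background job that sleeps for 2 seconds
job_pid=$!           # remember its process ID
sleep 1              # pause this script for 1 second, no matter what
wait "$job_pid"      # block until that specific background job has exited
wait                 # no argument: block until all remaining child jobs have exited
echo "all background jobs finished"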

19/04/2008

Solving (almost) any boot problems

After reinstalling Windows in a dual-boot environment, GRUB will be gone, because Windows overwrites any boot manager in the MBR. The hassle-free solution is to get a copy of the Super Grub Disk, available at www.supergrubdisk.org. This nice tool will help you either reinstall GRUB or even write back the Windows MBR if you want to, so it should always be in your CD-ROM collection BEFORE anything goes wrong. It guides you through all the steps, so just use it.

How to mount a partition image

OK, I just backed up a whole partition, changed the partition size, and right afterwards something went wrong - so how do I restore the image?

In my case I had extended an ext3 partition because of disk space issues. Now there is a problem: if I dd'ed the image (see the previous post in this blog) back to the partition, the file system would be exactly the same size as when I created the image, which is not what I want. What I actually want is to copy the CONTENT of the image to the new partition. Here is how to do this:

Uncompress the image (if it was compressed) via the command
gunzip /pathtoyourfile/file

If you stored the file on an external hard drive, I recommend using a (k/x)ubuntu live CD. Open a terminal and run sudo -s to get a root console that lets you do anything you want.

Now create a mount point wherever you want, then mount the uncompressed image as root (sudo) with:
mount -t ext3 -o loop,ro /pathtoyourfile/filename.img /mountpointyouwanttouse

The loop option creates a virtual (loop) device that presents your image as a block device; the ro option mounts the image read-only (to avoid accidentally deleting anything!).
As you can see, you can specify the file system, so this works for all common file systems, e.g. CD/DVD ISO images or NTFS images. Here's the command for mounting a CD/DVD ISO image:
mount -o loop -t iso9660 /pathtoisofile/filename.iso /mountpointyouwanttouse

Now you can access the content of your image and, for example, copy it via cp or pull out single files.

To restore the whole content of a Linux partition you might use the command
cp -r -p -P /mountpointyouwanttouse/* /pathtomountpointofyourLinuxInstallation/


A short explanation: -r copies recursively, -p preserves ownership, mode, timestamps and so on (otherwise everything would belong to root afterwards!!), and -P tells cp not to follow symbolic links in the source (because of the recursion, links pointing back into the tree could otherwise make you copy parts of your installation over and over until the partition runs out of free space!).
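
Put together, a restore session might look like this (all paths are placeholders for illustration, assuming the image holds an ext3 file system):

sudo -s                                        # root shell, e.g. on an Ubuntu live CD
mkdir /mnt/image                               # mount point for the image
mount -t ext3 -o loop,ro /media/usbdisk/backup.img /mnt/image
cp -r -p -P /mnt/image/* /mnt/newpartition/    # copy the contents to the new partition
# note: top-level dot files (hidden files) are not matched by *
umount /mnt/image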

17/04/2008

How to create and restore partition images

You like your lovingly fine-tuned Linux? You often play around with your system and would like to have a clean, ready-to-use copy at hand in case anything goes wrong? Then this might interest you.

How to create images of partitions and restore them. (Without expensive software)

  1. Get a Live-Linux of your choice and start it
  2. Use the following command as superuser (sudo or root) to create a gzipped image file of the partition of your choice
    dd if=/dev/hdax | gzip > /mntpoint/filename.img.gz
    or an uncompressed image by
    dd if=/dev/hdax of=/mntpoint/filename.img
    replace hdax by sdax where appropriate
  3. To restore the image, use (as root)
    gunzip -c /mntpoint/filename.img.gz | dd of=/dev/hdax
    or, for the uncompressed version, have a look at this blog post.
    Again, replace hdax by sdax where appropriate. Ubuntu users: you need a root console (sudo -s).
A very good article about dd can be found on Wikipedia (dd)
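
As a concrete example (device name and paths are just placeholders), imaging the second partition of the first disk to a USB drive and restoring it later:

# create a compressed image of /dev/sda2 on an external drive mounted at /media/usbdisk
dd if=/dev/sda2 | gzip > /media/usbdisk/sda2.img.gz
# write it back later (this destroys the current content of /dev/sda2!)
gunzip -c /media/usbdisk/sda2.img.gz | dd of=/dev/sda2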

If you want to save your image file on an external hard drive (which is probably a good idea), be careful which file system you use and remember this limitation:

FAT32 allows a maximum file size of 4 GB (which is most likely far too small for any reasonable image file nowadays...).

You can use NTFS on an external hard drive, but then you need to mount the drive by hand with read/write access - the magic word is ntfs-3g (see the example below).
Another way to use NTFS-formatted external hard drives without manual mounting is a current live CD from (x/k)ubuntu - these always mount external hard drives in read/write mode.
Backing up a large file does work, but the processing speed is very low (about 30 min for a 7.5 GB partition) because ntfs-3g eats up a lot of processor power.
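
For the manual route, a typical ntfs-3g mount looks something like this (device and mount point are examples, run as root):

mkdir -p /mnt/usbdisk
mount -t ntfs-3g /dev/sdb1 /mnt/usbdisk    # mounts the NTFS partition read/write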

Webmin is great

Just to let you know: if you don't want to hunt down all the fancy configuration files your Linux offers and edit them via vi, this might be THE solution for you:

Webmin is a graphical, web-browser-based configuration interface which you can access locally (or, as mentioned before, securely from remote via SSH port forwarding) by typing

https://localhost:10000

into your preferred browser after a successful install - well, into Konqueror, because the current version seems to act up with Mozilla Firefox.

With Webmin, administration of an SSH server, file sharing via Samba, user and group administration (and synchronization between the modules) and many more things you might find handy are just one mouse click away [... that sounds like a Windows ad].

15/04/2008

Synchronise via unison and ssh

I've seen one crashed hard disk too many last week, so I decided to tackle my long overdue synchronisation issues.
Situation: 3 computers (Archimedes, Xanthippe and Tisiphone), creative work-in-progress chaos on all three of them and one ssh-accessed backup account on our server to synchronise all three accounts.
Rsync is a nice, simple console backup tool which also works over ssh, but it only synchronises in one direction: files in the source directory overwrite those in the destination directory, with no interactive handling of conflicts.
Unison is more sophisticated. It comes with a nice Gtk GUI capable of showing diffs and of setting replacement rules for conflicting files (e.g. prefer the most recently modified one).
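
For the record, the ssh-based invocations look roughly like this (host names and paths are placeholders):

# one-way copy with rsync over ssh: the source side wins
rsync -av -e ssh ~/work/ backup@server:/backup/xanthippe/work/
# two-way reconciliation with unison over ssh
unison ~/work ssh://backup@server//backup/xanthippe/work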

06/04/2008

S.M.A.R.T. - how to hopefully avoid hard drive failures

S.M.A.R.T. is a self-monitoring system for hard drives, and you should find it in almost any reasonable (even older) drive nowadays.

The problem is that you use your hard drives for a long time without knowing their actual status. But there is a very nice package that at least gets you that information: smartmontools.

Install this package with YaST if it is not already on your system, then open a root (or sudo) console and run

smartctl -a /dev/hda

- change the device path to the one applicable for your drive; in most cases /dev/sda will do the trick.
You will get an output with many parameters - be careful if values drop below or come close to their threshold; on my older drives quite a few attributes are actually marked pre-fail. What else is there to do?
Oh yes:

smartctl -H /dev/hda

will give you the overall health status of your drive. You can also initiate a short or a long self-test for your drive by running

smartctl -t short /dev/hda or smartctl -t long /dev/hda

Please note: the long test may run for several hours... after the test has finished you can view the result with the -a option again.

This does not make you entirely safe, but it gives you a good idea of when you had better have good backups. (BTW - you can never have enough backups.)
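
If you want to keep an eye on this regularly, a tiny check script (purely illustrative; the device path is a placeholder and it needs to run as root) could wrap the -H call shown above:

#!/bin/bash
# minimal sketch: warn if the overall SMART health check does not report PASSED
DRIVE=/dev/sda
if ! smartctl -H "$DRIVE" | grep -q PASSED; then
    echo "WARNING: $DRIVE did not report PASSED - run smartctl -a $DRIVE"
fi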

04/04/2008

Bash log

If your finger hurts from hammering the up-arrow key to retrieve that magic console command from long ago, take a look at the ~/.bash_history file in your home directory.

You can also change the log size by setting the HISTFILESIZE and HISTSIZE variables (on my system, the default seems to be 1000). See this discussion.
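
For example, to raise both limits you could add something like this to ~/.bashrc (10000 is an arbitrary value):

# keep more bash history than the default
export HISTSIZE=10000        # commands kept in memory per session
export HISTFILESIZE=10000    # lines kept in ~/.bash_history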

Graphical secure remote server administration

In short: install Webmin on the server you want to administer remotely, but allow only connections from "localhost". Give an account of your choice - preferably not a root-like one - ssh access. Build a tunnel to your server's local port 10000 via

ssh -L <chooseaport>:localhost:10000 <username>@<IPofremotemachine>

Now you can access the remote webmin interface simply by opening a browser of your choice on your machine and typing

https://localhost:<chooseaport>

into the address bar. Nice, easy, and secure!

Additionally: if your SSH server is listening on a port other than the standard port 22, you can connect to it using the -p option, e.g.

ssh -p <portnumber> -L <chooseaport>:localhost:10000 <username>@<IPofremotemachine>
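
With concrete (made-up) values - local port 8443, a server at 192.0.2.10 whose sshd listens on port 2222 - that would be:

ssh -p 2222 -L 8443:localhost:10000 admin@192.0.2.10
# then point your browser at https://localhost:8443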

03/04/2008

VNC server over a secure SSH connection

The standard way provided by openSuSE to get a remote graphical login to your Linux desktop is Krfb. It uses an obscure kind of 'invitation' system and I never got the hang of it. TightVNC works like any old VNC server on Linux: start a VNC X session and log in, password-protected, from a remote machine via a VNC viewer like Krdc (the KDE3 version works for me, the KDE4 one doesn't). Unfortunately, a plain VNC connection is not exactly secure: anyone sniffing your network might find out what you are doing on the remote machine at that moment (e.g. typing passwords). A safer alternative is to export the VNC session over an SSH tunnel, as follows.
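
The usual recipe, sketched here with assumed defaults (a TightVNC session on display :1, which listens on port 5901 - adjust names and ports to your setup):

# on the remote machine: start a password-protected VNC session on display :1
vncserver :1
# on the local machine: forward the VNC port through ssh
ssh -L 5901:localhost:5901 <username>@<IPofremotemachine>
# then point Krdc (or any VNC viewer) at localhost:1, i.e. localhost:5901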

Google Earth 4.2 with ATI - fglrx and XGL on OpenSuSE 10.3

Found somewhere on the net and adapted to OpenSuSE 10.3: after installation, Google Earth doesn't start and eats up about 99% of your processor time.

The solution is to put a working libGL.so.1 into the installation directory of Google Earth:

cd /opt/google-earth
sudo wget http://shanti.mojo.cc/docs/libGL.so.1


and you can enjoy Google Earth again.


Found here.