Monday, December 31, 2012

Optical illusion: Can the brain be tricked?

Want to see the outer circles in the image below appear to rotate? Focus on the center and move your head back and forth:


Saturday, December 8, 2012

Another brain puzzle (image)

Optical illusion to trick your brain

Is this image made of concentric circles?

Why is skb recycling done (Linux)

Why is this done?
* Saves the cost of allocating and de-allocating memory repeatedly.
* Savings are significant because this is a very frequent operation (skb allocation and de-allocation typically happen once per packet).

Recent changes to SKB recycling:

"- Make skb recycling available to all drivers, without needing driver
  modifications.

- Allow recycling skbuffs in more cases, by having the recycle check
  in __kfree_skb() instead of in the ethernet driver transmit
  completion routine.  This also allows for example recycling locally
  destined skbuffs, instead of only recycling forwarded skbuffs as
  the transmit completion-time check does.

- Allow more consumers of skbuffs in the system use recycled skbuffs,
  and not just the rx refill process in the driver.

- Having a per-interface recycle list doesn't allow skb recycling when
  you're e.g. unidirectionally routing from eth0 to eth1, as eth1 will
  be producing a lot of recycled skbuffs but eth0 won't have any skbuffs
  to allocate from its recycle list."

Note: Generic skb recycling is slightly slower than doing it in the driver.

Custom implementation

If you need to implement SKB recycling for your kernel module you could use this approach.

SKB recycling can be implemented in a simple fashion. The way to do it would be through implementing and using wrapper functions for:
* dev_alloc_skb(), and
* dev_kfree_skb()

Say, for example, you implement a custom_dev_alloc_skb() and a custom_kfree_skb() which are called by your code instead of the originals. You can maintain a simple array of skb pointers and allocate and de-allocate skbs from this pool, as sketched below.
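Below is a minimal, hypothetical sketch of this idea (names such as custom_dev_alloc_skb() are invented for the example). It assumes a kernel of this era that still provides skb_recycle_check(), keeps a single global pool with no locking, and ignores the per-CPU and shared-skb issues a real implementation would have to handle:

#include <linux/skbuff.h>
#include <linux/netdevice.h>

#define RECYCLE_POOL_SIZE 64
#define RECYCLE_SKB_LEN   2048          /* fixed buffer size kept in the pool */

static struct sk_buff *recycle_pool[RECYCLE_POOL_SIZE];
static int recycle_count;

/* Wrapper around dev_alloc_skb(): hand out a pooled skb when one is available. */
static struct sk_buff *custom_dev_alloc_skb(unsigned int len)
{
        if (len <= RECYCLE_SKB_LEN && recycle_count > 0)
                return recycle_pool[--recycle_count];
        return dev_alloc_skb(RECYCLE_SKB_LEN);
}

/* Wrapper around dev_kfree_skb(): return the skb to the pool if it is safe to reuse. */
static void custom_kfree_skb(struct sk_buff *skb)
{
        if (recycle_count < RECYCLE_POOL_SIZE &&
            skb_recycle_check(skb, RECYCLE_SKB_LEN)) {  /* checks and resets the skb for reuse */
                recycle_pool[recycle_count++] = skb;
                return;
        }
        dev_kfree_skb(skb);
}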



Sunday, November 4, 2012

Multiuser MIMO: A brief introduction


  • Simultaneous transmissions to multiple stations. This extends the idea of MIMO, where multiple streams are sent to a single station; here the streams are sent to different stations.
  • Other cases where the MU-MIMO model applies- "The downlink of a DSL system with crosstalk between the wires for each user is one scenario where the transmitter terminals can cooperate, but the far end of the MIMO channel cannot."
  • Capacity of the MU-MIMO channel under the dirty-paper approach: it has been shown that when the interference on the channel is known beforehand at the transmitter, the achievable capacity is essentially the same as that achievable without interference.
  • Two approaches to MU-MIMO: (1) Signal processing  and (2) Dirty paper approach.
  • MIMO channels are represented by the standard equation y = Hx + w, where H is the channel transfer matrix with dimensions nr x nt (receive antennas x transmit antennas).
  • MU-MIMO is better suited to WLANs than to cellular because the WLAN channel is rich with multipath and is quasi-static. Cellular is difficult because of cost constraints, mobility, and small cell size.
  • Most recent MU-MIMO work assumes that channel state information (CSI) is available at the transmitter.
  • SU-MIMO benefits from CSI only when nt > nr or when operating at low SNRs.
  • MU-MIMO systems always benefit from CSI.
  • Obtaining CSI: for TDD systems, training or pilot data is sent on the uplink (the same channel is used in both directions, assuming reciprocity). For FDD systems, explicit feedback from the receiver is used, based on training data sent on the downlink.
  • Multiple access interference (MAI) is the interference caused to one user because of simultaneous transmission to other users. Techniques like multiuser detection (MUD) could be used to detect signals.
  • Ideally, using CSI, MAI should be mitigated at the transmitter. 
  • Capacity of MU-MIMO channel is based on the fraction of power allocated to each of the users of the system.
  • Channel inversion is a linear processing technique used at the transmitter to mitigate MAI: x = H†d = H*(HH*)^(-1) d, where H† denotes the pseudo-inverse and H* the conjugate transpose of H.
  • The expected capacity improvement with MU-MIMO is min(nt, nr). However, if the channel matrix H is ill-conditioned, these gains are not achieved with linear processing.

Difference between WiMAX and LTE Physical and MAC layers

This post covers the technological differences between the two and gives a brief bullet-point description of why LTE took off and WiMAX did not:

  • Slot times: LTE uses much shorter slot times (1 ms as opposed to WiMAX's 4 ms), so WiMAX has worse delay performance with multiple users and does not scale as well.
  • Uplink modulation: LTE introduced SC-FDMA, which dramatically improved uplink performance for cellular systems. This modulation technique combines the low peak-to-average power ratio of traditional single-carrier systems (such as GSM) with the multipath resistance of newer modulation schemes (such as OFDM). SC-FDMA also provides power savings for mobile users on the uplink.
  • Timing: WiMAX started first, so most of the early experimentation was done there, and LTE could learn from that experience and those mistakes. WiMAX was initially designed for fixed rather than mobile systems and was not able to adapt well to use by cellular providers.

Other non MAC/PHY differences:
  • WiMAX is based on IEEE standards (specifically, the 802.16 family), and then managed by the WiMAX Forum. LTE is defined by 3GPP.  
  • WiMAX was originally designed for fixed networks and only gradually evolved into a mobile network, so some design choices did not carry over cleanly. LTE was designed as a mobile network from day one. This particularly impacts power consumption at the receiver (handhelds): WiMAX handhelds tend to consume more power than LTE handhelds.

What is a metal spin or chip spin?

A silicon wafer is "photo-etched": the masking needed to etch out the different layers that produce the desired circuit pattern is placed upon the wafer by "projecting" the image onto the wafer with light (as in film photography), before the wafer is chemically processed to actually etch out the pattern. The light-sensitive chemical for the masking is applied while the wafer is spinning, to produce a uniform coat.

Respins would be where this light sensitive material is reapplied to a previously processed wafer so a new masking can be applied and the wafer re-etched to correct bugs detected in the previous run.

That probably refers to "spin-coating", a method of adding layers to a wafer. The wafer is fixed on a turntable, and the photochemically active lacquer is sprayed onto its surface. The wafer is then spun at the necessary speed to evenly distribute the material to the desired thickness.
The layer is then exposed to light through a mask, and after that, the non-exposed parts of the layer are etched away. If a new layer has to be added, that is a respin.

A brief history of the Indian subcontinent BC

     3300 BC - 1700 BC      Indus Valley Civilization.
     1700 BC - 1300 BC      Late Harappan Culture.
     1500 BC - 500 BC       Vedic Civilization.
     1200 BC - 316 BC       Kuru dynasty.
               1000 BC      Aryans expand into the Ganga valley.
               900 BC       Mahabharata War.
               800 BC       Aryans expand into Bengal. Beginning of the Epic Age:
                            Mahabharata composed. First version of Ramayana.
      700 BC - 321 BC       Maha Janapadas
      684 BC - 321 BC       Magadha Empire
               550 BC       Composition of the Upanishads
               544 BC       Buddha's Nirvana
               327 BC       Alexander's Invasion
               325 BC       Alexander marches ahead till Multan
               324 BC       Chandragupta Maurya defeats Seleucus Nicator
               322 BC       Rise of the Mauryas: Chandragupta establishes first Indian
                            Empire
      321 BC - 180 BC       Mauryan Empire
               298 BC       Bindusara crowned
               272 BC       Ashoka begins reign
               180 BC       Fall of the Mauryas. Rise of the Sungas under
                            Pushyamitra Sunga

Saturday, September 1, 2012

Comparison of 802.11n and legacy (non 802.11n) beacons

In terms of sheer size, 802.11n beacons are almost 4 times as large at about 225 bytes, while legacy beacons are usually only 62 bytes.

This big difference is due to the HT (high throughput) information element (IE) in the new beacon. It contains all the information relating to the HT features supported by the 11n AP; for example, the maximum A-MPDU and A-MSDU sizes the AP supports.

The second HT IE is known as the extended information element. It provides information like whether the secondary channel is above or below the primary channel.

With 802.11ac, beacon sizes have grown even more. Additional IEs are now carried for the new features, which makes the beacon larger still.

Friday, August 31, 2012

What does IRQ save do in Linux?


Use local_irq_save to disable interrupts on the local processor and remember their previous state. The flags can be passed to local_irq_restore to restore the previous interrupt state.


void local_irq_save(unsigned long flags);
void local_irq_restore(unsigned long flags);

The spinlock variant, spin_lock_irqsave(), disables interrupts on the local core only and additionally takes a lock, which is what protects the data against code running on other cores.
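A small usage sketch of both (hypothetical driver code; my_lock and my_counter are invented for the example):

#include <linux/irqflags.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);
static unsigned int my_counter;

void update_counter_local_only(void)
{
        unsigned long flags;

        local_irq_save(flags);          /* disable interrupts on this CPU, remember state */
        my_counter++;                   /* safe against local interrupt handlers only */
        local_irq_restore(flags);       /* restore the previous interrupt state */
}

void update_counter_smp_safe(void)
{
        unsigned long flags;

        spin_lock_irqsave(&my_lock, flags);     /* disable local interrupts and take the lock */
        my_counter++;                           /* safe against other CPUs and local interrupts */
        spin_unlock_irqrestore(&my_lock, flags);
}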


What is a gratuitous ARP?

ARP may also be used as a simple announcement protocol. This is useful for updating other hosts' mapping of a hardware address when the sender's IP address or MAC address has changed. Such an announcement, also called a gratuitous ARP message, is usually broadcast as an ARP request containing the sender's protocol address (SPA) in the target field (TPA=SPA), with the target hardware address (THA) set to zero. An alternative is to broadcast an ARP reply with the sender's hardware and protocol addresses (SHA and SPA) duplicated in the target fields (TPA=SPA, THA=SHA).
An ARP announcement is not intended to solicit a reply; instead it updates any cached entries in the ARP tables of other hosts that receive the packet. The operation code may indicate a request or a reply because the ARP standard specifies that the opcode is only processed after the ARP table has been updated from the address fields.
Many operating systems perform gratuitous ARP during startup. That helps to resolve problems which would otherwise occur if, for example, a network card was recently changed (changing the IP-address-to-MAC-address mapping) and other hosts still have the old mapping in their ARP caches.
Gratuitous ARP is also used by some interface drivers to provide load balancing for incoming traffic. In a team of network cards, it is used to announce a different MAC address within the team that should receive incoming packets.

Friday, August 10, 2012

How to move files between perforce changelists (CLs)

If you have a file opened as part of an existing changelist, how do you move it to a different changelist?

Say, for example, you have created CL 234 and you want to move a file named //depot/test/main.c that currently sits in your default CL into CL 234.

The command to do this would be:
p4 reopen -c 234 //depot/test/main.c
The -c switch in the above command is used to specify the changelist#.

How to revert a perforce changelist (CL) in Linux

Unless you have the Perforce UI (p4v), there is a set of steps you need to follow to back out changes from Perforce.

To backout the most recent CL (say CL100):


  1. p4 sync @99
  2. p4 edit //depot/foo.txt //depot/bar.txt //depot/ola.txt
  3. p4 sync
  4. p4 resolve -ay
  5. p4 submit

The first sync takes the repo to an older version (before the CL). edit opens the files to be reverted. The second sync brings the rest of the view back up to date. resolve and submit take the depot back to the old state. Remember to "accept yours" during the resolve stage.

Detailed instructions on backing out changes in other scenarios are listed here.



A silent reboot happens when a computer system crashes but does not produce a stack trace for debugging. A silent reboot can also happen because of a misconfiguration of the system.

Silent boot is a similar concept where the user of the system configures it for booting with minimum verbosity.

To achieve a silent boot on a Linux system, there are notes available online; typically it comes down to passing quieter options (such as "quiet") on the kernel command line and reducing console logging.

Tuesday, August 7, 2012

Error called object !=0ul is not a function

This usually happens because of mismatched braces or a missing operator on the indicated line.

The compiler does not find an operator where it needs to and interprets it as an improper function call.
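For example, a missing multiplication operator produces this class of error (hypothetical snippet; the "!=0ul" variants of the message typically come from macro expansions):

int scale(int x)
{
    int factor = 3;
    /* Missing '*' between factor and (x + 1): the compiler parses this as a
     * call to 'factor', hence "called object ... is not a function". */
    return factor (x + 1);   /* should be: factor * (x + 1) */
}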

Wednesday, July 25, 2012

What is softc in the networking driver?

Softc refers to the miscellaneous ("soft") state maintained by a device driver. You are likely to come across this term when reading structure definitions; for example, ath_softc is used in the ath9k wireless driver.

Under the NetBSD model, and in most Linux drivers, the first element of a softc-style structure is (a pointer to) the device for which the state is maintained. For example, in the ath_softc structure the first element is the
struct ieee80211_hw *hw to which the state belongs. Some functions which take this structure are listed below, followed by a hypothetical sketch of such a structure:

int ath_startrecv(struct ath_softc *sc);
bool ath_stoprecv(struct ath_softc *sc);
void ath_flushrecv(struct ath_softc *sc);
u32 ath_calcrxfilter(struct ath_softc *sc);
int ath_rx_init(struct ath_softc *sc, int nbufs);
void ath_rx_cleanup(struct ath_softc *sc);
int ath_rx_tasklet(struct ath_softc *sc, int flush, bool hp);
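As an illustration of the convention only (all names here are invented, not taken from ath9k), a driver-private softc might look like this:

#include <linux/spinlock.h>
#include <linux/types.h>
#include <net/mac80211.h>

/* Hypothetical driver-private ("softc") state for a mac80211 driver. */
struct my_softc {
        struct ieee80211_hw *hw;   /* device this state belongs to (first member by convention) */
        spinlock_t lock;           /* protects the fields below */
        int rx_bufs;               /* number of receive buffers allocated */
        bool radio_on;
};

/* mac80211 hands the softc back via hw->priv in every callback. */
static int my_start(struct ieee80211_hw *hw)
{
        struct my_softc *sc = hw->priv;

        sc->radio_on = true;
        return 0;
}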

What is a MLME? Is it present in a FullMAC driver?

MLME stands for Medium access control (MAC) sublayer management entity. This is the entity where the MAC state machines reside. Examples of states a MLME may assist in reaching:
  • Authenticate
  • Deauthenticate
  • Associate
  • Disassociate
  • Reassociate
  • Beacon
  • Probe
  • Timing Synchronization Function (TSF)
In the ath5k and ath9k drivers, mac80211's MLME implementation is currently handled by net/mac80211/ieee80211_sta.c. This handles only the STA MLME.

Saturday, July 21, 2012

Implementation difference between a Linux process and a thread?

The difference between running a thread and a process is that threads share data structures (address space, file descriptors, and so on), whereas separate processes must use IPC to share the same data.

According to a nice post I found online, here is the explanation:
Linux uses a 1:1 threading model, with (to the kernel) no distinction between processes and threads -- everything is simply a runnable task. *
On Linux, the system call clone clones a task, with a configurable level of sharing, among which are:
  • CLONE_FILES: share the same file descriptor table (instead of creating a copy)
  • CLONE_PARENT: don't set up a parent-child relationship between the new task and the old (otherwise, child's getppid() = parent's getpid())
  • CLONE_VM: share the same memory space (instead of creating a COW copy)
fork() calls clone(least sharing) and pthread_create() calls clone(most sharing). **
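A small user-space sketch of that sharing knob (hypothetical example using glibc's clone() wrapper; error handling omitted). With CLONE_VM the child shares our address space, as pthread_create() arranges; without it the child gets a copy-on-write copy, as fork() does:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared_value = 0;

static int child_fn(void *arg)
{
    shared_value = 42;   /* visible to the parent only if CLONE_VM was used */
    return 0;
}

int main(void)
{
    const size_t stack_size = 64 * 1024;
    char *stack = malloc(stack_size);

    /* Stack grows downward on most architectures, so pass the top of the buffer. */
    pid_t pid = clone(child_fn, stack + stack_size, CLONE_VM | SIGCHLD, NULL);
    waitpid(pid, NULL, 0);

    printf("shared_value = %d\n", shared_value);   /* 42 with CLONE_VM, 0 without */
    free(stack);
    return 0;
}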

Saturday, June 30, 2012

Why are there #define instructions inside a structure


Consider the following structure. What is the need for having the macro definitions #define within the structure?
- The only reason for doing this is that the macros are directly related to an element of the structure. In this case, IEEE80211_NODE_PWR_MGT describes a bit in the ni_flags field; a short usage sketch follows after the structure.

struct ieee80211_node {
        struct ieee80211vap     *ni_vap;   /* associated vap */
        struct ieee80211com     *ni_ic;    /* copy from vap to save deref*/
        struct ieee80211_node_table *ni_table;/* NB: may be NULL */
        TAILQ_ENTRY(ieee80211_node) ni_list; /* list of all nodes */
        LIST_ENTRY(ieee80211_node) ni_hash; /* hash collision list */
        u_int                   ni_refcnt;  /* count of held references */
        u_int                   ni_scangen; /* gen# for timeout scan */
        u_int                   ni_flags;
#define IEEE80211_NODE_AUTH     0x000001    /* authorized for data */
#define IEEE80211_NODE_QOS      0x000002    /* QoS enabled */
#define IEEE80211_NODE_ERP      0x000004    /* ERP enabled */
/* NB: this must have the same value as IEEE80211_FC1_PWR_MGT */
#define IEEE80211_NODE_PWR_MGT  0x000010    /* power save mode enabled */
#define IEEE80211_NODE_AREF     0x000020    /* authentication ref held */
        ...........................
}
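Keeping the flag definitions next to ni_flags makes the call sites read naturally. A hypothetical usage sketch (helper names invented for the example):

/* The #defines live right next to the field whose bits they describe. */
static int node_is_power_saving(const struct ieee80211_node *ni)
{
        return (ni->ni_flags & IEEE80211_NODE_PWR_MGT) != 0;
}

static void node_authorize(struct ieee80211_node *ni)
{
        ni->ni_flags |= IEEE80211_NODE_AUTH;
}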

Wednesday, June 6, 2012

Format string for printk with different types of variables in C

int                  %d or %x
unsigned int         %u or %x
long                 %ld or %lx
unsigned long        %lu or %lx
long long            %lld or %llx
unsigned long long   %llu or %llx
size_t               %zu or %zx
ssize_t              %zd or %zx

Raw pointer value SHOULD be printed with %p.

u64 SHOULD be printed with %llu/%llx, (unsigned long long):

printk("%llu", (unsigned long long)u64_var);

s64 SHOULD be printed with %lld/%llx, (long long):

printk("%lld", (long long)s64_var);

If type is dependent on config option (sector_t), use format specifier
of biggest type and explicitly cast to it.

Reminder: sizeof() result is of type size_t.
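Putting a few of these together in a hypothetical kernel snippet:

#include <linux/kernel.h>
#include <linux/types.h>

static void print_example(void *buf, size_t count, u64 total)
{
        /* %p for a raw pointer, %zu for size_t, and a cast to unsigned long long for u64 */
        printk(KERN_INFO "buf=%p count=%zu total=%llu\n",
               buf, count, (unsigned long long)total);
}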

Tuesday, May 22, 2012

What is a softlockup? How do I debug it?

A softlockup occurs when a kernel thread or a process does not relinquish control of a CPU for a period of time (the softlockup_thresh setting). This can only be caused by code running in kernel space.

Softlockups are detected by a per-CPU watchdog. They are typically caused by software bugs, for example kernel code stuck in an infinite loop; a sketch of such a bug is shown below.
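A minimal, hypothetical sketch of kernel-module code that would trip the watchdog (do not run this on a machine you care about):

#include <linux/kthread.h>
#include <linux/preempt.h>
#include <linux/sched.h>

/* Hypothetical kernel thread that spins with preemption disabled: after
 * softlockup_thresh seconds the watchdog reports "BUG: soft lockup - CPU#N stuck". */
static int hog_cpu(void *data)
{
        preempt_disable();
        while (!kthread_should_stop())
                cpu_relax();            /* never schedules, never re-enables preemption */
        preempt_enable();
        return 0;
}

/* Started from module init with: kthread_run(hog_cpu, NULL, "cpu-hog"); */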

Wednesday, May 16, 2012

Difference between angle bracket < > and double quotes “ ” while including header files in C?

From stackoverflow:
It's compiler dependent. That said, in general "" searches for headers relative to the directory of the including file (or the current working directory) before the system include path, while <> is used for system headers. The exact search rules are given in the spec (Section 6.10.2).
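For example (file names hypothetical):

#include <stdio.h>       /* system header: searched on the system include path */
#include "myproject.h"   /* local header: searched relative to this file first,
                            then typically falls back to the system path */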

Wednesday, May 9, 2012

How is diff implemented in Unix / How does it work?

diff essentially solves the classical computer science problem of finding the longest common subsequence (LCS).

The LCS problem has an optimal substructure: the problem can be broken down into smaller, simple "subproblems", which can be broken down into yet simpler subproblems, and so on, until, finally, the solution becomes trivial. The LCS problem also has overlapping subproblems: the solution to a higher subproblem depends on the solutions to several of the lower subproblems. Problems with these two properties—optimal substructure and overlapping subproblems—can be approached by a problem-solving technique called dynamic programming, in which the solution is built up starting with the simplest subproblems. The procedure requires memoization—saving the solutions to one level of subproblem in a table (analogous to writing them to a memo, hence the name) so that the solutions are available to the next level of subproblems. 

diff internally builds an edit-distance style table over the lines of the two input files and computes the minimum number of changes required to transform one file into the other. I found a very nice and simple implementation of calculating the edit distance between two strings online:


#include <stdio.h>
#include <string.h>

#define MAXLEN 80

int findMin(int d1, int d2, int d3) {
   /*
    * return min of d1, d2 and d3.
    */
   if(d1 < d2 && d1 < d3)
       return d1;
   else if(d1 < d3)
       return d2;
   else if(d2 < d3)
       return d2;
   else
      return d3;
}

int findEditDistance(char *s1, char *s2) {
    /*
     * returns edit distance between s1 and s2.
     */
   int d1, d2, d3;

   if(*s1 == 0)
       return strlen(s2);
   if(*s2 == 0)
       return strlen(s1);
   if(*s1 == *s2)
       d1 = findEditDistance(s1+1, s2+1);
   else
       d1 = 1 + findEditDistance(s1+1, s2+1);    // update.
   d2 = 1+findEditDistance(s1, s2+1);                   // insert.
   d3 = 1+findEditDistance(s1+1, s2);                   // delete.

   return findMin(d1, d2, d3);
}

int main() {
    char s1[MAXLEN], s2[MAXLEN];

    printf("Enter string 1: ");
    if(!fgets(s1, MAXLEN, stdin))
        return 0;
    s1[strcspn(s1, "\n")] = '\0';      /* strip the trailing newline */

    while(*s1) {
        printf("Enter string 2: ");
        if(!fgets(s2, MAXLEN, stdin))
            break;
        s2[strcspn(s2, "\n")] = '\0';
        printf("Edit distance(%s, %s) = %d.\n", s1, s2, findEditDistance(s1, s2));
        printf("Enter string 1(enter to end): ");
        if(!fgets(s1, MAXLEN, stdin))
            break;
        s1[strcspn(s1, "\n")] = '\0';
    }

    return 0;
}


The basic algorithm for diff is described in "An O(ND) Difference Algorithm and its Variations", Eugene W. Myers, 'Algorithmica' Vol. 1 No. 2, 1986, pp. 251-266; and in "A File Comparison Program", Webb Miller and Eugene W. Myers, 'Software--Practice and Experience' Vol. 15 No. 11, 1985, pp. 1025-1040. The algorithm was independently discovered as described in "Algorithms for Approximate String Matching", E. Ukkonen, `Information and Control' Vol. 64, 1985, pp. 100-118

Tuesday, May 8, 2012

What is memoization? Explanation with an example.


Memoization is the process of storing the results of already-computed subproblems in a table to reduce the overall compute time of your problem.

We will explain the concept of memoization through a nice example from Wikipedia on the Fibonacci sequence:

Fibonacci sequence
Here is a naïve implementation of a function finding the nth member of the Fibonacci sequence, based directly on the mathematical definition:
   function fib(n)
       if n = 0 return 0
       if n = 1 return 1
       return fib(n − 1) + fib(n − 2)
Notice that if we call, say, fib(5), we produce a call tree that calls the function on the same value many different times:
  1. fib(5)
  2. fib(4) + fib(3)
  3. (fib(3) + fib(2)) + (fib(2) + fib(1))
  4. ((fib(2) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
  5. (((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
In particular, fib(2) was calculated three times from scratch. In larger examples, many more values of fib, or subproblems, are recalculated, leading to an exponential time algorithm.
Now, suppose we have a simple map object, m, which maps each value of fib that has already been calculated to its result, and we modify our function to use it and update it. The resulting function requires only O(n) time instead of exponential time:
   var m := map(0 → 0, 1 → 1)
   function fib(n)
       if map m does not contain key n
           m[n] := fib(n − 1) + fib(n − 2)
       return m[n]
This technique of saving values that have already been calculated is called memoization; this is the top-down approach, since we first break the problem into subproblems and then calculate and store values.
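The same idea as a small sketch in C (assuming n stays small enough for the result to fit in an unsigned long long):

#include <stdio.h>

#define MAX_N 94                 /* fib(93) is the largest Fibonacci number fitting in 64 bits */

static unsigned long long memo[MAX_N];
static int computed[MAX_N];

unsigned long long fib(int n)
{
    if (n < 2)
        return n;
    if (!computed[n]) {          /* compute each subproblem only once */
        memo[n] = fib(n - 1) + fib(n - 2);
        computed[n] = 1;
    }
    return memo[n];
}

int main(void)
{
    printf("fib(50) = %llu\n", fib(50));   /* 12586269025 */
    return 0;
}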

Solution: function declaration isn't a prototype in C

This warning is seen when you declare a function separately, for example:

int getData();

and then say somewhere within your code this function is defined as:

int getData() {
....
.....
}

The reason for this warning is that in C a declaration such as fun1() is not a prototype: it says nothing about the arguments, whereas fun1(void) explicitly declares that the function takes none. If you change the declaration to:
int getData(void);

your problem will be solved.

Saturday, May 5, 2012

What is a server for?

One of my naive friends asked me this. As the name suggests, a server is a machine that serves webpages, or web applications. Example of a server is the google blogger server which hosts this blog. A server can be running on different types of hardware ranging from a simple desktop to elaborate blade servers. They can be running different operating systems like Windows or Linux.

Wednesday, May 2, 2012

Can a meteorite burn my house or damage it?

No. When a meteor is passing through the last 7 miles of the earth's atmosphere it cools down; this part of the descent is referred to as the dark flight. There has never been any documented evidence of a burning, or even hot, meteorite hitting the earth's surface. If the object is hot, it is probably a piece of an aircraft, or even a UFO :)

Sunday, April 15, 2012

Difference between blocking and non-blocking callbacks

As per wiki:
There are two types of callbacks: blocking callbacks (also known as synchronous callbacks, or just callbacks) and deferred callbacks (also known as asynchronous callbacks). These two design choices differ in how they control data flow at runtime. Blocking callbacks are invoked before the calling function returns, while deferred callbacks may be invoked after it returns. Deferred callbacks are often used in the context of I/O operations or event handling. While deferred callbacks imply the existence of multiple threads, blocking callbacks usually (but not always) rely on a single thread; therefore blocking callbacks are not a common source of synchronization problems.
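A small C illustration of the distinction (hypothetical code; the deferred case is simulated here with a worker thread, so compile with -pthread):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

typedef void (*callback_t)(int result);

static void on_done(int result) { printf("result = %d\n", result); }

/* Blocking (synchronous) callback: invoked before this function returns. */
static void compute_blocking(int x, callback_t cb)
{
    cb(x * 2);
}

/* Deferred (asynchronous) callback: invoked later, from another thread,
 * possibly after compute_deferred() has already returned. */
static void *worker(void *arg)
{
    sleep(1);
    ((callback_t)arg)(42);
    return NULL;
}

static pthread_t compute_deferred(callback_t cb)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, (void *)cb);
    return t;
}

int main(void)
{
    compute_blocking(10, on_done);            /* prints before main continues */
    pthread_t t = compute_deferred(on_done);  /* prints about a second later */
    pthread_join(t, NULL);
    return 0;
}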

Difference between mallocing and local variables

It is interesting to see the difference between these two kinds of declarations:

char* name = malloc(256*sizeof(char));
// more code
free(name);

versus:

char name[256];

This has been nicely explained on stackoverflow:
In the first code, the memory is dynamically allocated on the heap. That memory needs to be freed with free(). Its lifetime is arbitrary: it can cross function boundaries, etc.
In the second code, the 256 bytes are allocated on the stack, and are automatically reclaimed when the function returns (or at program termination if it is outside all functions). So you don't have to (and cannot) call free() on it. It can't leak, but it also won't live beyond the end of the function. Choose between the two based on the requirements for the memory.
 

Saturday, April 14, 2012

What happens in a Linux sysctl call? What is sysctl?

The sysctl interface is a mechanism exported under the proc file system at /proc/sys. It allows you to read and change the running configuration of the Linux kernel; typically this involves reading and writing files under the /proc/sys virtual file system (the sysctl(8) command-line tool reads and writes these same entries).

802.11n Dynamic MIMO power save mode

Introduction
Dynamic MIMO power save is used by 802.11n radios for power saving across multiple tx-rx chains.
Running multiple radio chains increases power consumption, but also increases the achievable data rate. However, it is not useful to have all chains active all the time, so this mechanism shuts down unused chains to conserve power. For example, if a receiver is only receiving beacons from an AP and not doing anything else, running two chains is not useful.

Hence, if the client is operating in a conservative battery mode, it could downshift to a minimal 1x1 configuration by negotiation with the AP. The AP can activate the full MIMO mode in the client by sending an RTS (request to send) frame. This mode is optional in the 802.11n standard.

Debugging and detecting 802.11n MIMO power save: 
If your wireless trace shows low throughput and a lot of RTS packets sent from the AP to the client, it could be indicative of this power save scheme being in use. To change this, adjust the power manager settings on the client so the mode is not used. I have observed this a lot with Intel 3600 client chipsets working with Atheros AP chipsets.

Tradeoff
Recent studies have also suggested that it may not always be advantageous to use SMPS at the client, since in some cases the receiver ends up spending more energy overall.

Relationship between dynamic MIMO PS and SMPS
+ This power save mechanism is also referred to as spatial multiplexing power save (SMPS).
+ It is also available in newer WLAN technologies such as 802.11ac.


Thursday, April 5, 2012

Live Monitoring and Writing Raw 802.11 Packets

This is an excerpt from a complete article I found online. Interesting:
The madwifi driver can be used in a live "monitor" mode, by creating a monitor VAP and sending packets to it. All packets sent to a monitor mode VAP will bypass any state machine.  

To create a monitor VAP, use:  
wlanconfig ath1 create wlandev wifi0 wlanmode monitor
ifconfig ath1 up
Finally, you can choose to receive packets on ath1 in several different packet formats:  
echo '801' > /proc/sys/net/ath1/dev_type # only 802.11 headers  
echo '802' > /proc/sys/net/ath1/dev_type # prism2 headers  
echo '803' > /proc/sys/net/ath1/dev_type # radiotap headers  
echo '804' > /proc/sys/net/ath1/dev_type # atheros descriptors

Sunday, April 1, 2012

Why is the linux kernel code written only in C

The Linux kernel is written in C (plus a small amount of architecture-specific assembly) and in no other language -- in particular, not C++.

Ever wondered why?

This was nicely answered on one of the forums by Linus Torvalds:

"In fact, in Linux we did try C++ once already, back in 1992. It sucks. Trust me - writing kernel code in C++ is a BLOODY STUPID IDEA.
"The fact is, C++ compilers are not trustworthy. They were even worse in
1992, but some fundamental facts haven't changed: 1) the whole C++ exception handling thing is fundamentally broken. It's _especially_ broken for kernels. 2) any compiler or language that likes to hide things like memory allocations behind your back just isn't a good choice for a kernel. 3) you can write object-oriented code (useful for filesystems etc) in C, _without_ the crap that is C++."

Monday, March 19, 2012

Setting up quick path based access on SVN server or repository

From a post I found somewhere on the web:

In your svn\repos\YourRepo\conf folder you will find two files, authz and passwd. These are the two you need to adjust.
In the passwd file you need to add some usernames and passwords. I assume you have already done this since you have people using it:
[users]
User1=password1
User2=password2
Then you want to assign permissions accordingly with the authz file:
Create the conceptual groups you want, and add people to it:
[groups]
allaccess = user1
someaccess = user2,user3
Then choose what access they have from both the permissions and project level.
So let's give our "all access" guys all access from the root:
[/]
@allaccess = rw
But only give our "some access" guys read-only access to some lower level project:
[/someproject]
@someaccess = r
You will also find some simple documentation in the authz and passwd files.

svnserve.conf: Option expected

When you edit svnserve.conf, you cannot have whitespace at the start of a line before an option name; if you do, svn does not find the option.

Solution: Open svnserve.conf and delete extra spaces at the beginning of any line (usually a line where a # (pound) sign was removed to uncomment it).

Splitting Bibtex file into multiple files

I was trying to split my one large Bibtex file into multiple bibtex files. I tried different commands such as \input or \include. However, none of them worked.

I also tried to include multiple \bibliography commands in my main file. But that resulted in "Illegal, another \bibdata command" error.

Solution: \bibliography{file1,file2} where your bibtex files are named file1.bib and file2.bib and are in the same directory should always work.

Thursday, March 8, 2012

(Solution) p4 Error: Can’t clobber writable file Perforce

Files in a Perforce workspace are kept read-only until they are checked out. However, if for some reason they become writable without being checked out, you can see this error. I fixed it by changing the file permissions on the entire directory:

chmod 755 -R *
p4 sync ...
If you have any open (edited) files, move them somewhere else, do a force sync, and move them back.

Wednesday, February 29, 2012

Filter expression for wireshark to check WMM traffic of different types

BK:  udp && wlan.qos.priority == 1
BE:  udp && wlan.qos.priority == 0
VI:  udp && wlan.qos.priority == 5
VO:  udp && wlan.qos.priority == 6
(WMM maps 802.1D user priorities 1/2 to BK, 0/3 to BE, 4/5 to VI, and 6/7 to VO.)

Saturday, February 25, 2012

More neutrino jokes

Here are some neutrino jokes collected from around the web:
-We don’t allow faster than light neutrinos in here, said the bartender. A neutrino walks into a bar.
- Neutrino. Knock knock.
- Hipsters liked neutrinos before they arrived.
- I wrote a speed of light joke…but a neutrino beat me to it.
- A. To prove particles can travel faster than light Q. Why did the neutrino cross the road?
- I’m going to tweet my neutrino joke yesterday.
- Want to hear a joke about neutrinos? It’d probably go straight through you.
- If that #neutrino is faster than light does that explain why physicists never saw it coming?
Do neutrinos go faster than light?
Some physicists think that they might.
In the cold light of day,
I am sorry to say,
The story is probably shite.
- lumidek Luboš Motl

Friday, February 24, 2012

How to block indent code in vi or vim

The > command is to be used.

* To indent a block of 5 lines use 5>>

* To visually edit a block of lines use vjj> 
(v will start the visual mode, jj will select the lines to be indented, and > will result in the indent).

* To indent a curly-braces block, put your cursor on one of the curly braces and use >%

* If you’re copying blocks of text around and need to align the indent of a block in its new location, use ]p instead of just p. This aligns the pasted block with the surrounding text.

Running multicast traffic with iperf

Run an iperf server, and bind it to a multicast address:
mymachine1:/root# iperf -s -u -B 224.0.55.55 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 224.0.55.55
Joining multicast group  224.0.55.55
Receiving 1470 byte datagrams
UDP buffer size: 41.1 KByte (default)
Run the multicast client. This will send the required IGMP messages. If your router has IGMP snooping enabled, multicast should work smoothly.
mymachine2:/root# iperf -c 224.0.55.55 -u -T 32 -t 3 -i 1
------------------------------------------------------------
Client connecting to 224.0.55.55, UDP port 5001
Sending 1470 byte datagrams
Setting multicast TTL to 32
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[  3] local 10.1.10.3 port 51296 connected with 224.0.55.55 port 5001
[  3]  0.0- 1.0 sec    129 KBytes  1.06 Mbits/sec
[  3]  1.0- 2.0 sec    128 KBytes  1.05 Mbits/sec
[  3]  2.0- 3.0 sec    128 KBytes  1.05 Mbits/sec
[  3]  0.0- 3.0 sec    386 KBytes  1.05 Mbits/sec
[  3] Sent 269 datagrams
IGMP messages seen by sniffing packets:
mymachine3:/root# tcpdump -nevv -i xl0 -s 1515 igmp
tcpdump: listening on xl0, link-type EN10MB (Ethernet), capture size 1515 bytes
06:28:40.887868 00:c0:aa:1c:77:85 > 00:c0:aa:1c:33:99, ethertype IPv4 (0x0800),
 length 46: (tos 0x0, ttl   1, id 59915, offset 0, flags [none], proto: IGMP (2),
 length: 32, options
 ( RA (148) len 4 )) 10.1.10.2 > 224.0.55.55: igmp v2 report 224.0.55.55
06:28:42.196233 00:c0:aa:1c:77:85 > 01:00:5e:00:00:02, ethertype IPv4 (0x0800),
 length 46: (tos 0x0, ttl   1, id 59920, offset 0, flags [none], proto: IGMP (2),
 length: 32, options
 ( RA (148) len 4 )) 10.1.10.2 > 224.0.0.2: igmp leave 224.0.55.55

 * MAC addresses have been changed for privacy.

Why a bash script cannot change environment variables

Read in an online forum:
Your shell process has a copy of the parent's environment and no access to the parent process's environment whatsoever. When your shell process terminates, any changes you've made to its environment are lost. Sourcing a script file (with ". script.sh" or "source script.sh") runs its commands in the current shell, so it is the most commonly used method for configuring a shell environment; you may just want to bite the bullet and maintain one such script for each of the two flavors of shell.

Thursday, February 23, 2012

Difference between L2 (layer 2) and L3 (layer 3) multicast

MAC Multicast address generation

Why is multicast dealt with at two layers?
Multicast was initially designed as a layer-3 functionality, where multiple hosts on a network can subscribe to a multicast address.
However, the major deficiency of this approach is that once a router has decided which port to forward a particular multicast group to, the IP-layer multicast is usually translated into a MAC broadcast on the switch. This results in several inefficiencies: L2 frames are received by hosts that never asked for them. To avoid this, the switch does IGMP snooping (i.e., overhearing the IGMP messages) and marks the switch ports that subscribe to that multicast group. It then keeps track of those ports using a destination multicast MAC address obtained by converting the L3 multicast address into an L2 multicast frame address.

Conversion of L3 multicast address to L2 MAC address: (based on information on a microsoft website)
"To support IP multicasting, the Internet authorities have reserved the multicast address range of 01-00-5E-00-00-00 to 01-00-5E-7F-FF-FF for Ethernet and Fiber Distributed Data Interface (FDDI) media access control (MAC) addresses. As shown in Figure above, the high order 25 bits of the 48-bit MAC address are fixed and the low order 23 bits are variable."

Two approaches to implementing multicast on switches:
* Pure IGMP snooping in L2 switches
* Deep packet IGMP inspection in L3 switches

Wednesday, February 22, 2012

Check the number of file systems supported on Linux

Maintained under /proc.

The virtual file system (VFS) sits behind the system call interface; it registers all supported file systems and maintains them in a linked list.

Files and directories are maintained as inodes.
Each mounted file system is described by a superblock.
Operations supported on the superblock include create_inode, destroy_inode, read_inode, and write_inode, among others.

Command to check the file systems supported on your machine:
cat /proc/filesystems

Checking the mounted file systems along with their mount points:
mount

Saturday, February 18, 2012

Killer preprocessors: Preventing a C code from compiling

And avoiding rapid detection :)

#define FALSE 1
#define FALSE exit()
#define while if
#define goto
#define struct union
#define assert(x)
#define volatile
#define continue break
#define double int
#define long short
#define unsigned signed

Sunday, February 5, 2012

Makefile older than makefile.org solution (ubuntu)

There could be two sources for the problem. Try both (if needed).

1) Make sure you clean all temporary files while building.

2) If that does not solve the problem, check whether the system clock is in sync. Typically this error happens because the timestamps on the dependencies are newer than your system time. Be sure to compare the actual time against your system time. Commands to correct the system time:
apt-get install netdate
netdate tcp 128.2.136.71

Create /var/spool/cron/tabs/root:

# update time with ntp server
0 3,9,15,21 * * * /usr/sbin/netdate 128.2.136.71

Then run:
chmod 600 /var/spool/cron/tabs/root
/etc/init.d/cron restart

How a hilbert curve can be used to represent linear space in 2D

Consider the set of numbers from 0 to 15 and say you want to lay them out on a square. We can represent those numbers using a second-order Hilbert curve as follows (code to compute the mapping is sketched after the figure):

0---1   14--15
    |   |
3---2   13--12
|            |
4   7---8   11
|   |   |    |
5---6   9---10
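
The mapping from the 1-D index d to an (x, y) cell can be computed with the well-known iterative d2xy conversion, sketched below for an n-by-n grid where n is a power of two (the orientation of the curve may differ from the hand-drawn figure above):

#include <stdio.h>

/* Rotate/flip a quadrant appropriately. */
static void rot(int n, int *x, int *y, int rx, int ry)
{
    if (ry == 0) {
        if (rx == 1) {
            *x = n - 1 - *x;
            *y = n - 1 - *y;
        }
        int t = *x; *x = *y; *y = t;   /* swap x and y */
    }
}

/* Convert a distance d along the curve into (x, y) on an n-by-n grid. */
static void d2xy(int n, int d, int *x, int *y)
{
    int rx, ry, t = d;
    *x = *y = 0;
    for (int s = 1; s < n; s *= 2) {
        rx = 1 & (t / 2);
        ry = 1 & (t ^ rx);
        rot(s, x, y, rx, ry);
        *x += s * rx;
        *y += s * ry;
        t /= 4;
    }
}

int main(void)
{
    for (int d = 0; d < 16; d++) {     /* the 4x4 (order 2) curve discussed above */
        int x, y;
        d2xy(4, d, &x, &y);
        printf("d=%2d -> (%d,%d)\n", d, x, y);
    }
    return 0;
}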
 

Merge list error while running apt-get update or apt-cache search

This is a nasty error that can occur while you are using the Ubuntu package manager.  It looks something like this:
E:Encountered a section with no Package: header,
E:Problem with MergeList /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_natty_main_binary-i386_Packages,
E:The package lists or status file could not be parsed or opened.
This results because the package manager is unable to merge the newly determined lists with existing temporary ones in /var/lib/apt/lists.

Solution: This can be fixed by deleting the temporary lists:
sudo rm /var/lib/apt/lists/* -vf
and re-building the database:
sudo apt-get update

Solution gd.h not found error while installing

I ran into this error while trying out a nice graphics utility. It turns out it is caused by the missing graphics development (GD) library, so I installed the corresponding package:
sudo apt-get install libgd-xpm-dev
This solved the problem.

On redhat/fedora systems try:
yum install gd-devel

Thursday, February 2, 2012

Difference between MPDU, MSDU, AMPDU, and AMSDU in 802.11n and 802.11ac

Difference in implementation:
Following a packet handed down from the IP layer to the MAC layer, the sequence of processing is as follows. The packet goes from the IP layer to the MAC LLC (logical link control) sublayer, or upper MAC. The interface where this hand-off happens is called the MAC service access point (MAC-SAP).

MSDU: the MAC service data unit. This is the unit of data the MAC layer receives from the upper layer for transmission.

AMSDU: aggregation of MSDUs performed directly at the MAC layer [2]. Multiple MSDUs are aggregated at the MAC layer and pushed into a single MPDU, which is then handed to the PHY. An A-MSDU carries multiple subframes under a single frame header, all destined for the same client and the same service class (basically, they all have the same TID).
* The main motivation for aggregating at the MSDU level is that (1) Ethernet is the native frame format for most clients, and (2) since the Ethernet header is much smaller than the 802.11 header, many Ethernet frames can be combined into a single A-MSDU with little overhead.

MPDU: MAC protocol data units are the frames passed from the MAC layer to the PHY layer.

AMPDU [1]: multiple MPDUs aggregated and pushed into a single PPDU (PHY protocol data unit). The whole aggregate shares a single PLCP preamble and header.

** The 802.11n system was designed so that either AMPDU, AMSDU or both aggregation algorithms could be used[3].


When do we want to use an A-MPDU and when do we want to use an A-MSDU?
Rather, the real question is why A-MPDU aggregation is preferred over A-MSDU aggregation most of the time, i.e., why most systems use A-MPDU aggregation and not A-MSDU aggregation.

A-MSDU increases the maximum frame transmission size from 2,304 bytes to almost 8 KB (7,935 bytes to be exact), while A-MPDU allows up to 64 KB.

However, the main problem with A-MSDUs is that the entire blob becomes one MAC frame (one protocol data unit) and hence has only one CRC check. So as the frame size increases, the probability of error increases. Since there is a single CRC check, we cannot retransmit just a part of the A-MSDU, and in most cases this leads to retransmission at lower rates, which nullifies the benefit of aggregation. An A-MPDU, on the other hand, consists of multiple MPDUs, each with its own CRC. Hence, in the event of a failure, only the failed subframes need to be retransmitted, resulting in higher efficiency. This gain comes at a cost, since with every A-MPDU we send MAC headers for all of the subframes.

Hence the decision of A-MSDU versus A-MPDU is a tradeoff between error probability and retransmission cost (A-MSDU) versus per-subframe MAC header overhead (A-MPDU). In most real-world systems the latter wins, and hence most systems implement A-MPDUs.

References:
[1] Ginzburg et. al, "Performance Analysis of A-MPDU and A-MSDU Aggregation in IEEE 802.11n", 2007.
[2] Gautam Bhanage, "AMSDU vs AMPDU: A Brief Tutorial on WiFi Aggregation Support", Report number: GDB2017-004, arXiv:1704.07015 [cs.NI], April 2017.
[3] IEEE 802.11 standard, "https://en.wikipedia.org/wiki/IEEE_802.11".