Deterministic Systems: Processes

Alex Egg,

This is the first part of a multi-part study of RTOS systems. We begin by studying timer behavior in non-RTOS systems such as Linux and then contrast it w/ a real RTOS system, TinyOS. Finally, we will explore ways to improve the non-deterministic limitations of Linux by tweaking the scheduling system.

Processes: In modern, multi-process OSs, CPU time is shared between processes. Under high load, timers can be delayed, exhibiting non-deterministic behavior.

This paper is a study of real-time operating systems, examining the limited deterministic guarantees of timers in multi-process (non-RTOS) operating systems like Linux. We explore the phenomenon of timer drift by developing a CPU-bound, timer-heavy application that floods a network w/ UDP packets at a specified rate. At high timer frequencies the effects of drift become apparent, and an offset time becomes necessary to correct the drift. Also, in order to send data at a rate greater than 200 Mbps, multiple processes are required to maintain timing accuracy. Experimental results and analysis are included below.

Processes & Timers

One way to achieve concurrency is through the use of child processes. In theory, timers should allow our programs to achieve some measure of determinism, for example: do routine X every Y seconds. However, as processes share processor time and are often swapped in and out, timer precision cannot be guaranteed, especially as CPU usage increases. For this project we created a network traffic generator which makes heavy use of timers and processes.
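
To make the imprecision concrete, here is a minimal sketch (separate from the traffic generator in Appendix A) that measures how late usleep() actually returns; under load the measured interval can noticeably exceed the request:

```c
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    struct timeval t0, t1;
    long requested = 100; /* requested sleep in microseconds */

    for (int i = 0; i < 10; i++) {
        gettimeofday(&t0, NULL);
        usleep(requested);
        gettimeofday(&t1, NULL);
        long actual = (t1.tv_sec - t0.tv_sec) * 1000000L
                    + (t1.tv_usec - t0.tv_usec);
        printf("requested=%ldus actual=%ldus drift=%ldus\n",
               requested, actual, actual - requested);
    }
    return 0;
}
```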

Algorithm

The program is a simple loop. It takes two arguments, r and t, which are megabits per second and total time, respectively. The program first computes how many packets must be sent per second to transmit r megabits, then the time interval at which the loop needs to send a packet:

packets_per_second = (r × 10^6) / packet_size_bits
time_interval = 1/packets_per_second
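
For example, using the app's 11776-bit (1472-byte) packet size from Appendix A, a target of r = 100 Mbps gives packets_per_second = 100,000,000 / 11776 ≈ 8491, and therefore time_interval ≈ 1/8491 s ≈ 117.8 µs between packets.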

The data transmission algorithm is similar to this:

while (true)
	sendto(...)
	sleep(time_interval)
	break if (t_elapsed)
end

Theoretically, this should send exactly r megabits per second; in practice, however, it does not. You will notice that the achieved bandwidth is smaller than r. This is because our initial theory was based on the premise that all the routine calls in the loop (sendto, sleep) return instantly, which is not true: each call carries a return-time overhead. This overhead is not constant, either; it drifts with the load of the CPU as a growing number of processes compete for CPU time. For this reason, we must plan for this drift and compensate for it. A perfect compensation would require an adaptive algorithm which can increment/decrement the drift offset as conditions change at runtime.
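
A minimal sketch of the constant-offset compensation, mirroring the OVERHEAD constant in Appendix A (the 40 µs figure is what was measured for that environment and will differ elsewhere):

```c
#define OVERHEAD_US 40  /* approx per-iteration call overhead, measured for this env */

/* Nominal inter-packet interval minus the measured call-return overhead. */
static double compensated_sleep_us(int packets_per_second)
{
    double interval_us = 1e6 / (double)packets_per_second;
    return interval_us - OVERHEAD_US;
}
```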

Experimental Results

Processes

In order to compare against the analytical hypothesis from the above section, I ran a series of trials in which I recorded the maximum bandwidth achievable at each process count.

Figure 1: Processes vs. bandwidth

| Processes | Max Bandwidth (mean, Mbps) |
|---|---|
| 1 | 180 |
| 2 | 458 |
| 3 | 611 |
| 4 | 691 |
| 5 | 695 |
| 6 | 696 |
| 7 | 713 |
| 8 | 753 |
| 9 | 781 |
| 10 | 840 |
| 11 | 860 |
| 12 | 861 |
| 13 | 860 |
| 14 | 870 |
| 15 | 806 |
| 16 | 825 |

Table 1: Max bandwidth reached is around 825 Mbps, where an asymptote is reached and adding more processes doesn't help

From my experimental results, the max bandwidth achievable with 1 process was roughly 180 Mbps. In reality it could have gone higher, but I would lose too large a factor of determinism: at any rate above 180 Mbps on 1 process, the timer interval dips below 30 µs, which I have established as a minimum threshold for precision. In other words, any timer interval below 30 µs is not deterministic, and the bandwidth varies from second to second to a degree where it is not reliable.

If the program bumps up to 2 processes, the max bandwidth reached was around 458 Mbps, a more-than-double jump from 1 process. However, this growth rate isn't sustained: a jump to 3 processes only nets 611 Mbps, and the curve reaches an asymptote of around 800 Mbps as the process count grows.

Drift Effects 1

The phenomenon of timer drift is best illustrated in the following experiment. I ran multiple trials of my program at increasing increments of expected bandwidth and recorded the actual bandwidth achieved. The results are interesting: as expected bandwidth increases, actual bandwidth lags proportionally, i.e., the lag is not constant:

Figure 2: The lag of actual bandwidth vs. expected

| Expected (Mbps) | Actual (Mbps) |
|---|---|
| 32 | 27 |
| 62 | 52 |
| 128 | 100 |
| 256 | 240 |
| 512 | 500 |
| 1024 | 800 |

Table 2: Actual vs expected bandwidth

At small bandwidths the difference between actual and expected is small; however, the lag grows as bandwidth increases and is exceptionally large at bandwidths above 512 Mbps. There are two possible explanations for this phenomenon.

First, as the requested bandwidth increases, so does the number of processes the app forks. This increases the load on the CPU, so each process is given a stricter scheduling allotment; as we discussed in the introduction, this leads to increasingly non-deterministic timers, which would explain the lag shown in Figure 2.

Secondly, the constant drift offset may be the culprit. The app has a constant drift parameter which works fine for one process. However, as bandwidth goes up, so does the number of processes handling the job; the load on each process therefore decreases, and with the same constant drift offset applied across x processes, the actual output will be delayed artificially. One solution would be an adaptive drift offset calculated at runtime, as opposed to a constant; a sketch follows.
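
This paper doesn't implement that, but a hypothetical sketch of such an adaptive offset might look like the following: measure the achieved per-packet interval over a batch, then nudge the offset toward the observed error. (send_adaptive, now_us, and the damping factor are illustrative assumptions, not part of Appendix A.)

```c
#include <sys/time.h>
#include <unistd.h>

/* Current wall-clock time in microseconds. */
static long now_us(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1000000L + tv.tv_usec;
}

void send_adaptive(long interval_us, int batch, int total_packets)
{
    long offset_us = 0; /* starts at zero and adapts at runtime */
    int sent = 0;

    while (sent < total_packets) {
        long start = now_us();
        for (int i = 0; i < batch && sent < total_packets; i++, sent++) {
            /* sendto(...) would go here */
            usleep(interval_us - offset_us);
        }
        /* achieved per-packet interval over this batch */
        long actual = (now_us() - start) / batch;
        /* nudge the offset toward the observed error, damped by 2 */
        offset_us += (actual - interval_us) / 2;
        if (offset_us < 0) offset_us = 0;
        if (offset_us > interval_us) offset_us = interval_us;
    }
}
```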

Drift Effects 2

One issue that I had with the program was outputting a constant bandwidth consistently every second. As a process became more loaded, or as a timer interval became smaller, the output bandwidth would vary by larger deltas each second. One factor that may be contributing to this effect is, once again, timer imprecision:

Figure 3: Variance (standard deviation) of bandwidth as a function of process count. Smaller bars are better; larger spread means higher standard deviation, which is undesirable.

| Band. (Mbps) | 1 Process | 2 Processes | 3 Processes | 4 Processes | 5 Processes | 6 Processes |
|---|---|---|---|---|---|---|
| 100 | 1.6 | 1.07 | 1 | 1 | 1 | 0.7 |
| 200 | 5.7 | 1.9 | 0.48 | 0.37 | 0.5 | 0.4 |
| 300 |  | 5 | 3.8 | 3.3 | 2.6 | 2.12 |
| 400 |  | 7.4 | 2 | 2.05 | 4.17 | 3.07 |
| 500 |  |  | 19 | 5.33 | 5.09 | 4.07 |
| 600 |  |  | 22.7 | 13.59 | 7.7 | 3.07 |
| 700 |  |  |  | 28.6 | 11.44 | 9.23 |

Table 3: Std. deviation of bandwidth per process count. The empty cells mean that the given bandwidth isn't possible w/ that number of processes

From my experimental results, it appears that any timer interval below 30 µs is not sustainable and is non-deterministic. This is apparent from the high standard deviations in Figure 3, particularly for the 600 Mbps sample: at 3 processes, the 3 timers are maxed out at 30 µs, hence the high deviation; as the process count goes up, each process sends less often, its timer interval grows, and the variance goes down. This pattern holds for all bandwidth samples.
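
This is why Appendix A sizes the worker pool the way it does: it picks enough workers that each worker's own sleep interval stays at or above the 30 µs floor. A condensed sketch of that sizing logic:

```c
#include <math.h>

#define SLEEP_MIN 30  /* us; below this, usleep() precision proved unreliable */

/* Given the nominal inter-packet interval for the whole target rate,
 * return how many workers are needed so that each worker's interval
 * (interval_us * workers) is at least SLEEP_MIN. */
int workers_needed(double interval_us)
{
    return (int)ceil(SLEEP_MIN / interval_us);
}
```

Each worker then runs its own copy of the send loop with the interval scaled by the worker count.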

Conclusion

Our initial hypothesis was correct in that timer drift and CPU scheduling have a large effect on the determinism of our program. However, their effect was underestimated: a constant drift offset was not good enough to compensate for the time-variable conditions of a modern OS. For example, in one instance, a drift offset of D may work fine for given r and t inputs; a few days later, that same configuration will not produce accurate output, and you will need to adjust D for the same r and t inputs, perhaps because your computer is experiencing a higher load for some reason (background updates, etc.). However, with the application of child processes, the effects of drift can mostly be mitigated: as the task is parallelized, the per-process timer frequencies can go down. In other words, timer frequency is inversely proportional to process count. On my system the max bandwidth produced was between 800 and 900 Mbps.

Appendix A

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <math.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <sys/mman.h>
#include <sys/time.h>


#define BUFLEN 1514
#define MAX_PACKET 11776 //bits -- 1.4KB
#define OVERHEAD 40 //approx overhead for sendto return/usecond for this env. MAX is 40ish
#define SRV_IP "127.0.0.1"
#define PORT 32000
#define MICRO_PER_SECOND 1000000
#define MILLION 1000000
#define PROCESS_MBIT_THRESHOLD 180
#define SLEEP_MIN 30


void usage()
{
    printf("\n");
    printf("\n -r : Mbps ");
    printf("\n -t : Trial time ");
    
    printf("\n\n");
    
}




int main(int argc, char** argv)
{
    int i;
    //200 Mbps seems to be my max on 1 process
    int r; //Mbps from terminal arg
    int t; //seconds from terminal arg
    int packet_count=0;
    int overhead=OVERHEAD;
    
    
    if (argc < 5 ) //need -r <Mbps> -t <seconds>
    {
        usage();
        exit(0);
    }
    else
	{
        for (i=1; i < argc ; i+=2)
        {
            if (argv[i][0] == '-')
            {
                switch(argv[i][1])
                {
                    case 'r':
                        r = atoi(argv[i+1]);
                        break;
                    case 't':
                        t = atoi(argv[i+1]);
                        break;
                    default:
                        return(0);
                        break;
                }
                
            }
        }
    }


    //drift offset, chosen empirically per requested rate (constant at runtime)
    if(r<180)
    {
       overhead=41;
    }
    else if (r<458)
    {
        overhead=20;
    }
    else if (r<=600)
    {
        overhead=10;
    }
    else if(r<800)
    {
        overhead=1;
    }
    
    printf("overhead=%d\n",overhead);
    
    printf("r=%d t=%d\n", r,t);
    
    //socket stuff
    struct sockaddr_in si_other;
    int slen=sizeof(si_other);
    
    int _socket;


    char buf[BUFLEN];
    memset(buf, 0, BUFLEN); //initialize payload; contents don't matter for traffic gen
    
    _socket=socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (_socket==-1)
    {
        printf("Error: socket");
        exit(0);
    }
    memset((char *) &si_other, 0, sizeof(si_other));
    si_other.sin_family = AF_INET;
    si_other.sin_port = htons(PORT);
    if (inet_aton(SRV_IP, &si_other.sin_addr)==0)
    {
        fprintf(stderr, "inet_aton() failed\n");
        _exit(1);
    }
    
    
    //App. logic
    int r_bits = r * MILLION; //required bits/second
    
   
    int num_of_packets = r_bits / MAX_PACKET ; // packets/second
    
    double sleep_time = (1 / (double)num_of_packets);
    double usleep_time=sleep_time * MILLION; //sleep in us
    usleep_time= usleep_time - overhead ;
    
    int workers = ceil((float)SLEEP_MIN / usleep_time);
    printf("launching %d worker(s)\n", workers);


    usleep_time=usleep_time * workers; //scale for parallel workers
    
    if(usleep_time < SLEEP_MIN)
    {
        printf("WARNING: usleep_time < 30us -- this percission is not reliable: %lf, exiting\n", usleep_time);
        return 0;
    }
    
    printf("%d / %d = %d, sleep=%lf\n", r_bits, MAX_PACKET, num_of_packets, usleep_time);
    
    for(int i=0; i<workers; i++)
    {
        if (fork()==0)
        {
            printf("child started: pid=%d, w/ sleep:%lf\n", getpid(), usleep_time);
            struct timeval start_time;
            struct timeval stop_time;
            float elapsed;
            gettimeofday( &start_time, NULL );
            
            //do work
            for(;;)
            {
                if (sendto(_socket, buf, BUFLEN, 0, (struct sockaddr *)&si_other, slen)==-1)
                    printf("Error: sendto()\n");
                    
                packet_count++;
                usleep(usleep_time);
                
                gettimeofday( &stop_time, NULL );
                
                elapsed = (float)(stop_time.tv_sec  - start_time.tv_sec);
                elapsed += (stop_time.tv_usec - start_time.tv_usec)/(float)MICRO_PER_SECOND;
                
                if(elapsed>=(double) t)
                    break;
                
            }
            
            printf("child closed: %d - %d packets sent\n", getpid(), packet_count);
            _exit(0);
            //return 0;
        }
        else
        {
            //printf("i am parent, pid=%d\n", getpid());
        }
    }
    
    printf("waiting for children\n");
    wait(NULL);
    close(_socket);
    
    printf("exiting main\n");
    return 0;


}
For the next part of this 4-part series, click here: http://www.eggie5.com/87-deterministic-systems-semaphore

