
2008.08.18 Linux and Windows Server Performance Comparison

http://www.webperformanceinc.com/library/reports/windows_vs_linux_part1/



  1. Throughput: How much response bandwidth the server is able to generate.
  2. CPU Utilization: How taxed is the server becoming.
  3. Errors per Second: How frequently is the server unable to respond.
  4. Requests per Second: At what rate is the web browser making requests to the server.
  5. Hits per Second: At what rate is the server responding to requests from users.
  6. Duration: How long can a user expect to wait before the page is complete.
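As a rough illustration of how metrics like these fall out of a load test, here is a minimal sketch that buckets timed responses into one-second intervals (the record layout and sample values are invented for illustration, not the report's actual tooling):

```python
from collections import defaultdict

# Each record: (timestamp_sec, bytes_received, ok_flag)
records = [
    (0.2, 15000, True), (0.7, 12000, True),
    (1.1, 0, False),    (1.4, 18000, True),
]

buckets = defaultdict(lambda: {"hits": 0, "errors": 0, "bytes": 0})
for ts, nbytes, ok in records:
    b = buckets[int(ts)]          # one-second bucket
    if ok:
        b["hits"] += 1            # Hits per Second
        b["bytes"] += nbytes      # Throughput (bytes per second)
    else:
        b["errors"] += 1          # Errors per Second

for sec in sorted(buckets):
    print(sec, buckets[sec])
```

Requests per Second would be counted the same way, on the client side, at the moment each request is issued rather than when the response completes.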

Throughput

We start off with how much throughput the server was able to sustain under an increasing load. This measurement is taken from the total megabytes of HTTP traffic, and does not consider lower-level implementation overhead.

The sudden drop-off in traffic is somewhat surprising. Examining the Tomcat server logs for Windows displays some OutOfMemoryExceptions as the server is pressed under increasing load. The same logs for Linux, however, revealed no illuminating information.

CPU Utilization

The CPU utilization will give some insight as to how well the server was able to cope with the increasing load, and whether or not the server seemed too computationally overwhelmed to process any further users. Each server in the test was equipped with one CPU with HT Technology enabled. For simplicity, we will examine the average of the processor load across the two reported virtual CPUs.
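The averaging step is simple; a minimal sketch with invented per-interval utilization samples for the two virtual CPUs:

```python
# Per-interval utilization (%) reported for the two virtual CPUs
# (sample values invented for illustration)
cpu0 = [12.0, 35.5, 61.0, 58.5]
cpu1 = [10.0, 33.5, 59.0, 62.5]

# Average the two hyper-threaded virtual CPUs per interval
avg = [(a + b) / 2 for a, b in zip(cpu0, cpu1)]
print(avg)  # [11.0, 34.5, 60.0, 60.5]
```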

Evidently, both servers continued to consume the CPU linearly with load, up to roughly equal proportions. However, the load eases off slightly, but only slightly, during the server's slump in throughput. As the throughput once again begins to increase, the CPU increases as well.

The still relatively high CPU utilization during the throughput slump confirms our suspicion that Tomcat and the JVM are still chugging along, evidently running memory management or optimization cycles.

Errors per second

During our testing, the only errors that made themselves evident to the end user were transmission errors occurring when a user would attempt to open a connection to the server. To them, this means their web browser will display an error message, usually similar to "Connection refused by server". Every page returned during this test was a complete page, and not an error page from the server. Please note that since the Windows server showed significantly more tendency towards generating errors, this plot is scaled logarithmically.
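A "Connection refused" error surfaces at the socket level when the TCP connect attempt is rejected outright. A minimal sketch of how a load-testing client might classify such failures (the helper name and return values are invented for illustration):

```python
import socket

def try_connect(host, port, timeout=2.0):
    """Return 'ok' on success, 'refused' if the server rejects the
    connection outright, 'other' for any other failure."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "ok"                 # server accepted the connection
    except ConnectionRefusedError:
        return "refused"            # what the browser shows as an error
    except OSError:
        return "other"              # timeout, unreachable host, etc.
    finally:
        s.close()
```

Counting the "refused" results per one-second interval would yield an Errors per Second series like the one plotted here.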

Server Responsiveness

Now, let us examine the number of completed requests per second as measured by the testing tool. This number is permitted to decline with the number of users as the server becomes unresponsive and users are forced to wait before they can make another request. Under normal circumstances however, the number of requests should be equal to the number of responses received, and should overall be directly proportional to the total throughput of the server.

Interestingly, our Windows server here elects to maintain a consistent level of responsiveness to the user, preventing the web browser from having to wait a significant amount of time for a response. During our performance slump, we note that the server simply refuses further connections from the user, giving them an immediate error. By contrast, our Linux server appears to accept the connection, responding only when it is free to do so completely.
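This difference in behavior is consistent with how the TCP accept backlog works: a completed handshake can sit queued in the kernel until the server calls accept(), so the client "connects" successfully but then waits for a response. A minimal sketch of that queuing (the backlog size is invented, and whether excess connections are refused or silently dropped is OS-dependent; this is not the report's actual server configuration):

```python
import socket

# A listener with a small accept backlog. The OS queues completed
# TCP handshakes here; when the queue is full, further connection
# attempts are refused (or dropped, depending on the OS).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)                       # allow 1 pending, un-accepted connection
port = srv.getsockname()[1]

# The client connects successfully even before the server calls
# accept(): the kernel parked it in the backlog queue. This matches
# the "accept now, answer later" behavior described above.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
queued_ok = True

conn, _ = srv.accept()              # the connection leaves the queue
conn.close(); client.close(); srv.close()
```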

Hits per Second

After having seen how quickly requests were being issued to the server, let us move to the response rate from the server. The Hits per Second measures the rate of completed HTTP responses that were received from the server.

It seems here that even during the slump, despite the differing error-handling behavior, the rate of successfully processed HTTP messages remains roughly on par between our servers, once again showing a very slight edge for our Linux installation.

Duration

For our last performance aspect, we concern ourselves with how long a user can expect to take to complete their task. We measure the full time until a response is received from the server, waiting for it to arrive before moving on to the next page of the test.

For the duration of a business case, the baseline for each graph is defined as the amount of time the user spends "interacting" with a page before moving on to the next. The simulated time spent by the user on the last page is therefore omitted.
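The bookkeeping described above can be sketched as follows (the page and think-time values are invented): total scenario time is each page's wall-clock time plus the simulated interaction time on every page except the last.

```python
# (page_wallclock_sec, think_time_sec) per page of a business case
# (values invented for illustration)
pages = [(1.5, 5.0), (1.0, 5.0), (2.5, 5.0)]

# Every page's wall-clock duration counts, but the think time on
# the final page is omitted, as the report describes.
total = sum(w for w, _ in pages) + sum(t for _, t in pages[:-1])
print(total)  # 5.0 s of waiting + 10.0 s of interaction = 15.0
```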

The duration graphs here show the full range of measured durations during a given interval (dark background), with the averages on top (highlighted points).

Long Scenario

Medium Scenario

Short Scenario

In our long case (where the user must navigate the longest list of pages), the anomalies have had a chance to average out across one page or another. As expected, we see the results of our throughput slump as increased wait times from the server. For Linux, as the load approaches this turning point, the duration is seen to increase, and it never fully recovers. For our other cases, the general trend remains the same, with the effect on the average duration declining for shorter test cases.

Static & Dynamic Content

From these cases we see some relatively consistent increases in the Linux duration of each business case, but the question remains: were those increases attributable to the server being swamped by multiple requests, or were the requests simply hung up on the dynamically generated portions of the page? Please note that since the maximum durations are significantly different, these graphs are scaled logarithmically.

It is interesting to note here that there is a very slight but consistent increase in average durations for both servers, though more pronounced on our Linux server. Under load, the maximum duration for Windows rarely peaks above 10 seconds, whereas Linux steadily maintains maximum durations over 100 seconds.




Windows is certainly easier to use,

but I trust Linux a bit more. I really need to study Linux~~ my server just went down after a year..

Posted by [czar]