Why is this code 6.5x slower with optimizations enabled?


I wanted to benchmark glibc's strlen function for some reason, and found out that it apparently performs much slower with optimizations enabled in GCC; I have no idea why.



Here's my code:



#include <time.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>

int main() {
    char *s = calloc(1 << 20, 1);
    memset(s, 65, 1000000);
    clock_t start = clock();
    for (int i = 0; i < 128; ++i) {
        s[strlen(s)] = 'A';
    }
    clock_t end = clock();
    printf("%lld\n", (long long)(end - start));
    return 0;
}



On my machine it outputs:



$ gcc test.c && ./a.out
13336
$ gcc -O1 test.c && ./a.out
199004
$ gcc -O2 test.c && ./a.out
83415
$ gcc -O3 test.c && ./a.out
83415


Somehow, enabling optimizations causes it to run slower: roughly 6× slower at -O2/-O3 and almost 15× slower at -O1.










  • With gcc-8.2 the debug version takes 51334, the release version 8246. Release compiler options: -O3 -march=native -DNDEBUG – Maxim Egorushkin, 7 hours ago

  • Please report it to gcc's bugzilla. – Marc Glisse, 7 hours ago

  • Using -fno-builtin makes the problem go away. So presumably the issue is that in this particular instance, GCC's builtin strlen is slower than the library's. – David Schwartz, 7 hours ago

  • It is generating repnz scasb for strlen at -O1. – Marc Glisse, 7 hours ago

  • @MarcGlisse and for -O2 and -O3, it's loading and comparing the chars as integers. Unfortunately, the naive -O0 uses the library function, which uses vector instructions that beat this optimization easily. – EOF, 7 hours ago
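
To compare the two behaviours directly, one option is the -fno-builtin flag mentioned above; another is to route the call through a volatile function pointer, which also stops GCC from substituting its own inline expansion for strlen. A minimal sketch of the latter, with the strlen_ptr name introduced here purely for illustration:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* A volatile function pointer is opaque to the optimizer, so the call below
 * is always a genuine call into the C library, never the builtin expansion. */
size_t (*volatile strlen_ptr)(const char *) = strlen;

int main() {
    char *s = calloc(1 << 20, 1);
    memset(s, 65, 1000000);
    clock_t start = clock();
    for (int i = 0; i < 128; ++i) {
        s[strlen_ptr(s)] = 'A';
    }
    printf("%lld\n", (long long)(clock() - start));
    free(s);
    return 0;
}

Compiled at any -O level, this variant should time roughly like the unoptimized build above, since every iteration performs a real call into glibc's strlen.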
1 Answer
Testing your code on Godbolt's Compiler Explorer provides this explanation:



  • at -O0 or without optimisations, the generated code calls the C library function strlen;

  • at -O1 the generated code uses a simple inline expansion based on a repnz scasb instruction;

  • at -O2 and above, the generated code uses a more elaborate inline expansion (sketched below).
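
As a rough illustration of what "a more elaborate inline expansion" means in practice (loading and comparing several chars as one integer, as EOF notes in the comments above), here is a hedged sketch of the classic word-at-a-time scan. The strlen_wordwise name, the 4-byte word size and the test string are illustrative assumptions only; GCC's actual expansion differs in detail:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Word-at-a-time strlen sketch: scan 4 bytes per iteration and detect a zero
 * byte with bit tricks instead of testing each char individually. */
static size_t strlen_wordwise(const char *s) {
    const char *p = s;
    while ((uintptr_t)p % 4 != 0) {       /* advance until 4-byte aligned */
        if (*p == '\0')
            return (size_t)(p - s);
        p++;
    }
    for (;;) {
        uint32_t w;
        memcpy(&w, p, sizeof w);          /* load 4 chars as one integer */
        /* (w - 0x01010101) & ~w & 0x80808080 is nonzero iff some byte of w is 0 */
        if ((w - 0x01010101u) & ~w & 0x80808080u) {
            while (*p != '\0')            /* locate the exact terminator */
                p++;
            return (size_t)(p - s);
        }
        p += 4;
    }
}

int main() {
    printf("%zu\n", strlen_wordwise("hello, strlen"));   /* prints 13 */
    return 0;
}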

Benchmarking your code repeatedly shows a substantial variation from one run to another, but increasing the number of iterations shows that:



  • the -O1 code is much slower than the C library implementation: 32240 vs 3090

  • the -O2 code is faster than the -O1 code, but still substantially slower than the C library code: 8570 vs 3090.

This behavior is specific to GCC and glibc. The same test on OS/X with clang and Apple's Libc does not show a significant difference, which is no surprise as Godbolt shows that clang generates a call to the C library strlen at all optimisation levels.



This could be considered a bug in gcc/glibc, but more extensive benchmarking might show that for small strings the overhead of calling strlen outweighs the poor performance of the inlined code. The strings in your benchmark are uncommonly large, so focusing on ultra-long strings might not give meaningful results.



I updated the benchmark to use smaller strings: it shows similar performance at -O0 and -O2 for string lengths from 0 to 100, but still much worse performance at -O1 (roughly 3 times slower).



Here is the updated code:



#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

void benchmark(int repeat, int minlen, int maxlen) {
    char *s = malloc(maxlen + 1);
    memset(s, 'A', minlen);
    long long bytes = 0, calls = 0;
    clock_t clk = clock();
    for (int n = 0; n < repeat; n++) {
        for (int i = minlen; i < maxlen; ++i) {
            bytes += i + 1;
            calls += 1;
            s[i] = '\0';
            s[strlen(s)] = 'A';
        }
    }
    clk = clock() - clk;
    free(s);
    double avglen = (minlen + maxlen - 1) / 2.0;
    double ns = (double)clk * 1e9 / CLOCKS_PER_SEC;
    printf("average length %7.0f -> avg time: %7.3f ns/byte, %7.3f ns/call\n",
           avglen, ns / bytes, ns / calls);
}

int main() {
    benchmark(10000000, 0, 1);
    benchmark(1000000, 0, 10);
    benchmark(1000000, 5, 15);
    benchmark(100000, 0, 100);
    benchmark(100000, 50, 150);
    benchmark(10000, 0, 1000);
    benchmark(10000, 500, 1500);
    benchmark(1000, 0, 10000);
    benchmark(1000, 5000, 15000);
    benchmark(100, 1000000 - 50, 1000000 + 50);
    return 0;
}



Here is the output:




chqrlie> gcc -std=c99 -O0 benchstrlen.c && ./a.out
average length 0 -> avg time: 14.000 ns/byte, 14.000 ns/call
average length 4 -> avg time: 2.364 ns/byte, 13.000 ns/call
average length 10 -> avg time: 1.238 ns/byte, 13.000 ns/call
average length 50 -> avg time: 0.317 ns/byte, 16.000 ns/call
average length 100 -> avg time: 0.169 ns/byte, 17.000 ns/call
average length 500 -> avg time: 0.074 ns/byte, 37.000 ns/call
average length 1000 -> avg time: 0.068 ns/byte, 68.000 ns/call
average length 5000 -> avg time: 0.064 ns/byte, 318.000 ns/call
average length 10000 -> avg time: 0.062 ns/byte, 622.000 ns/call
average length 1000000 -> avg time: 0.062 ns/byte, 62000.000 ns/call
chqrlie> gcc -std=c99 -O1 benchstrlen.c && ./a.out
average length 0 -> avg time: 20.000 ns/byte, 20.000 ns/call
average length 4 -> avg time: 3.818 ns/byte, 21.000 ns/call
average length 10 -> avg time: 2.190 ns/byte, 23.000 ns/call
average length 50 -> avg time: 0.990 ns/byte, 50.000 ns/call
average length 100 -> avg time: 0.816 ns/byte, 82.000 ns/call
average length 500 -> avg time: 0.679 ns/byte, 340.000 ns/call
average length 1000 -> avg time: 0.664 ns/byte, 664.000 ns/call
average length 5000 -> avg time: 0.651 ns/byte, 3254.000 ns/call
average length 10000 -> avg time: 0.649 ns/byte, 6491.000 ns/call
average length 1000000 -> avg time: 0.648 ns/byte, 648000.000 ns/call
chqrlie> gcc -std=c99 -O2 benchstrlen.c && ./a.out
average length 0 -> avg time: 10.000 ns/byte, 10.000 ns/call
average length 4 -> avg time: 2.000 ns/byte, 11.000 ns/call
average length 10 -> avg time: 1.048 ns/byte, 11.000 ns/call
average length 50 -> avg time: 0.337 ns/byte, 17.000 ns/call
average length 100 -> avg time: 0.299 ns/byte, 30.000 ns/call
average length 500 -> avg time: 0.202 ns/byte, 101.000 ns/call
average length 1000 -> avg time: 0.188 ns/byte, 188.000 ns/call
average length 5000 -> avg time: 0.174 ns/byte, 868.000 ns/call
average length 10000 -> avg time: 0.172 ns/byte, 1716.000 ns/call
average length 1000000 -> avg time: 0.172 ns/byte, 172000.000 ns/call






  • Wouldn't it still be better for the inlined version to use the same optimizations as the library strlen, giving the best of both worlds? – Daniel H, 7 hours ago

  • It would, but the hand-optimized version in the C library might be larger and more complicated to inline. I have not looked into this recently, but there used to be a mix of complex platform-specific macros in <string.h> and hard-coded optimisations in the gcc code generator. Definitely still room for improvement on Intel targets. – chqrlie, 7 hours ago

  • Does it change if you use -march=native -mtune=native? – Deduplicator, 6 hours ago

  • Note that the GNU C library function for strlen() is likely optimised for extremely large strings (that no sane programmer will care about) at the expense of performance for small strings (that are extremely common); and the optimisations done by the library version should never be done. The problem here is that the OP's code doesn't keep track of the string's length itself (e.g. with an int len; variable) and should not have used strlen() at all, making the code so bad for performance that "optimised for something no sane programmer would care about" actually helped. – Brendan, 6 hours ago

  • @chqrlie: I'd also say that this is partly a symptom of a larger problem: code in libraries can't be optimised for any specific case and therefore must be "un-optimal" for some cases. To work around that it would have been nice if there were a strlen_small() and a separate strlen_large(), but there isn't. – Brendan, 4 hours ago
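
Brendan's point about tracking the length is worth making concrete. A minimal sketch of the question's program rewritten under that idea, with the len variable introduced here purely for illustration: scan the string once, then append from the remembered position instead of rescanning on every iteration.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main() {
    char *s = calloc(1 << 20, 1);
    memset(s, 'A', 1000000);
    size_t len = strlen(s);               /* scan the string once */
    clock_t start = clock();
    for (int i = 0; i < 128; ++i) {
        s[len++] = 'A';                    /* append without rescanning */
    }
    printf("%lld\n", (long long)(clock() - start));
    free(s);
    return 0;
}

This produces the same final string as the question's loop, but its running time no longer depends on how strlen is expanded at any optimization level.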