Why is my p-value correlated to difference between means in two sample tests?



A colleague recently claimed that a large p-value is not more support for the null hypothesis than a small one. That is also what I learned (the p-value is uniformly distributed under the null hypothesis; we can only reject the null, never confirm it...). But when I simulate two random normal samples (100 observations in each group) in R, the p-value (averaged over 30 repetitions) is correlated with the difference between the two means (with, for example, a t-test or a Mann–Whitney test).



Why is my p-value, even above the 0.05 threshold, correlated with the difference between the means of my two groups?



[Plot: mean p-value (30 repetitions per x) against x, where x is half the difference between the two means]



With 1000 repetitions for each value of x (x = difference between means / 2):
[Plot: mean p-value (1000 repetitions per x) against x]



My R code, in case this is just a silly mistake:



pvaluetot <- NULL
xtot <- NULL
seqx <- seq(0, 5, 0.01)

for (x in seqx) {
  ptemp <- NULL
  a <- 0

  repeat {
    a <- a + 1
    pop1 <- rnorm(100, 0 + x, 2)   # group 1: mean +x, sd 2
    pop2 <- rnorm(100, 0 - x, 2)   # group 2: mean -x, sd 2
    pvalue <- t.test(pop1, pop2)$p.value

    ptemp <- c(ptemp, pvalue)
    # print(ptemp)
    if (a == 30) break
  }

  pmean <- mean(ptemp)             # mean p-value over the 30 repetitions
  pvaluetot <- c(pvaluetot, pmean)
  xtot <- c(xtot, x)
  print(x)
}

pvaluetot
xtot
plot(pvaluetot, xtot)









hypothesis-testing statistical-significance p-value effect-size

asked 3 hours ago by Nakx, edited 2 hours ago
2 Answers

Why would you expect anything else? You don't need a simulation to know this is going to happen. Look at the formula for the t-statistic:
$t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}}$



Obviously, if you increase the true difference between the means, you expect $\bar{x}_1 - \bar{x}_2$ to be larger. You are holding the variance and the sample size constant, so the t-statistic must be larger and thus the p-value smaller.



I think you are confusing a philosophical rule about hypothesis testing with a mathematical fact. If the null hypothesis is true, you would expect higher p-values, on average, than when it is false; this has to be true for hypothesis testing to make any sense.
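As a minimal sketch of this point in base R (n = 100 and sd = 2 are chosen to mirror the question's setup; the grid of differences is arbitrary), power.t.test shows how the power of the test, and hence the chance of a small p-value, grows with the true difference in means while everything else is held fixed:

# Sketch: with n and sd fixed, a larger true difference in means (delta)
# gives a larger expected t-statistic and hence higher power, i.e. p-values
# below 0.05 become more and more likely. n = 100 and sd = 2 as in the question.
deltas <- seq(0.1, 2, by = 0.1)
pow <- sapply(deltas, function(d)
  power.t.test(n = 100, delta = d, sd = 2, sig.level = 0.05)$power)
round(cbind(delta = deltas, power = pow), 3)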






answered 1 hour ago by Matt P

As you said, the p-value is uniformly distributed under the null hypothesis. That is, if the null hypothesis is really true, then over repeated experiments we expect a flat distribution of p-values on [0, 1]. Consequently, a frequentist p-value says nothing about how likely the null hypothesis is to be true, since every p-value is equally probable under the null.



What you're looking at is the distribution of p-values under an alternative hypothesis. Depending on the formulation of this hypothesis, the resulting p-values can have any non-uniform, positively skewed distribution on [0, 1]. But this doesn't tell you anything about the probability of the null. The reason is that the p-value expresses the probability of the evidence under the null hypothesis, i.e. $p(D|H_0)$, whereas you want to know $p(H_0|D)$. The two are related by Bayes' rule:
$$
p(H_0|D) = \frac{p(D|H_0)\,p(H_0)}{p(D|H_0)\,p(H_0) + p(D|\neg H_0)\,p(\neg H_0)}
$$

This means that in order to calculate the probability you're interested in, you need to know and take into account the prior probability of the null being true ($p(H_0)$), the prior probability of the null being false ($p(\neg H_0)$), and the probability of the data given that the null is false ($p(D|\neg H_0)$). This is the purview of Bayesian, rather than frequentist, statistics.
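A purely hypothetical worked example of that formula (every number below is invented for illustration, not taken from the question or the simulation):

# Hypothetical numbers only, to show how the pieces of Bayes' rule combine.
p_H0      <- 0.5    # prior probability that the null is true
p_D_H0    <- 0.04   # probability of the data if the null is true
p_D_notH0 <- 0.40   # probability of the data if the null is false
p_H0_D <- (p_D_H0 * p_H0) / (p_D_H0 * p_H0 + p_D_notH0 * (1 - p_H0))
p_H0_D   # about 0.09: the posterior depends on the prior and on p(D | not H0)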



As for the correlation you observed: as I said above, the p-values will be positively skewed under the alternative hypothesis. How skewed depends on what that alternative hypothesis is. In the case of a two-sample t-test, the more you increase the difference between your population means, the more skewed the p-values become. This reflects the fact that you're making your samples increasingly different from what is plausible under the null, so by definition the resulting p-values (which reflect the probability of the data under the null) must decrease.
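That skew is easy to visualise. The sketch below (base R only; the number of repetitions, the 0.5 difference, and the helper name sim_p are arbitrary illustrative choices, with n = 100 and sd = 2 as in the question) compares the p-value distribution when the null is true with the distribution under one particular alternative:

# Sketch: p-values from a two-sample t-test are roughly uniform on [0, 1] when
# the null is true, but pile up near 0 when the true means differ.
set.seed(1)
sim_p <- function(delta, reps = 5000, n = 100, sd = 2) {
  replicate(reps, t.test(rnorm(n, 0, sd), rnorm(n, delta, sd))$p.value)
}
p_null <- sim_p(delta = 0)    # null true: flat histogram
p_alt  <- sim_p(delta = 0.5)  # null false: skewed towards small p-values
par(mfrow = c(1, 2))
hist(p_null, breaks = 20, main = "Null true (delta = 0)",    xlab = "p-value")
hist(p_alt,  breaks = 20, main = "Null false (delta = 0.5)", xlab = "p-value")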






answered 1 hour ago by Ruben van Bergen


























