
How to explain the utility of binomial logistic regression when the predictors are purely categorical



The resources that I have seen feature graphs such as the following:

[image: S-shaped logistic curve fitted to a binary response against a continuous predictor]



This is fine if the predictor $x$ is continuous, but if the predictor is categorical and has just a few levels, it's not clear to me how to justify the logistic model / curve.

I have seen this post; this is not a question about whether or not binary logistic regression can be carried out using categorical predictors.

What I'm interested in is how to explain the use of the logistic curve in this model, as there doesn't seem to be a clear way to do so like there is for a continuous predictor.



Edit

Data that has been used for this simulation:

library(vcd)
set.seed(2019)
n <- 1000
y <- rbinom(2*n, 1, 0.6)
x <- rbinom(2*n, 1, 0.6)
df <- data.frame(x = x, y = y)   # needed for the crosstabulations below


Crosstabulation:

> table(df)
   y
x     0    1
  0 293  523
  1 461  723

> prop.table(table(df))
   y
x        0      1
  0 0.1465 0.2615
  1 0.2305 0.3615

Mosaic plot:

[image: mosaic plot of x by y]
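A minimal sketch of how the mosaic plot and the conditional proportions of y within each level of x could be produced from df (assuming the vcd package loaded above):

# Conditional proportions P(y | x), computed row-wise
prop.table(table(df$x, df$y), margin = 1)

# Mosaic plot of x against y, shaded by Pearson residuals
mosaic(~ x + y, data = df, shade = TRUE)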





























Tags: machine-learning, logistic, binary-data, logistic-curve






asked 5 hours ago by baxx, edited 2 hours ago




















          2 Answers



















First, you could make a graph like that with a categorical x. It's true that the curve would not make much sense, but... so? You could say similar things about curves used in evaluating linear regression.

Second, you can look at crosstabulations; this is especially useful for comparing the DV to a single categorical IV (which is what your plot above does, for a continuous IV). A more graphical way to look at this is a mosaic plot.

Third, it gets more interesting when you look at multiple IVs. A mosaic plot can handle two IVs pretty easily, but they get messy with more. If there are not a great many variables or levels, you can get the predicted probability for every combination (a sketch of all three suggestions follows below).
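A minimal R sketch of these three suggestions, using simulated data with two hypothetical categorical IVs (x1, x2) purely for illustration:

library(vcd)  # for mosaic()

set.seed(1)
x1 <- rbinom(500, 1, 0.5)
x2 <- rbinom(500, 1, 0.4)
y  <- rbinom(500, 1, plogis(-0.5 + 0.8 * x1 + 0.4 * x2))
df <- data.frame(y = y, x1 = factor(x1), x2 = factor(x2))

# Crosstabulation of the DV against a single categorical IV
table(df$x1, df$y)

# Mosaic plot of the same relationship
mosaic(~ x1 + y, data = df)

# Predicted probability for every combination of the IVs
fit  <- glm(y ~ x1 + x2, data = df, family = binomial)
grid <- expand.grid(x1 = levels(df$x1), x2 = levels(df$x2))
grid$p_hat <- predict(fit, newdata = grid, type = "response")
grid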






answered 4 hours ago by Peter Flom












• Thanks, please see the edit that I have made to this post. It seems that you're suggesting (please correct me if I'm wrong) that there's not really much to say with respect to the use of the logistic curve in a set-up such as this post, other than that it happens to have properties which enable us to map odds to probabilities (which I'm not suggesting is useless). I was just wondering whether there was a way to demonstrate the utility of the curve in a similar manner to the use when the predictor is continuous. – baxx, 2 hours ago

• @baxx: Please also see my answer, in addition to Peter's answer. – Isabella Ghement, 1 hour ago



















          Overview of Binary Logistic Regression Using a Continuous Predictor



          A binary logistic regression model with continuous predictor variable $x$ has the form:



          log(odds that y = 1) = beta0 + beta1 * x (A)


          According to this model, the continuous predictor variable $x$ has a linear effect on the log odds that the binary response variable $y$ is equal to 1 (rather than 0).



          One can easily show this model to be equivalent to the following model:



(probability that y = 1) = exp(beta0 + beta1 * x)/[1 + exp(beta0 + beta1 * x)] (B)


          In the equivalent model, the continuous predictor $x$ has a nonlinear effect on the probability that $y = 1$.



          In the plot that you shared, the S-shaped blue curve is obtained by plotting the right hand side of equation (B) above as a function of $x$ and shows how the probability that $y = 1$ increases (nonlinearly) as the values of $x$ increase.
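As an illustration, the S-shaped curve in equation (B) can be drawn directly in R for assumed coefficient values (beta0 = -2 and beta1 = 1.5 below are arbitrary, chosen only to produce a visible S shape):

b0 <- -2
b1 <- 1.5
# plogis(z) = exp(z) / (1 + exp(z)), i.e. the right-hand side of equation (B)
curve(plogis(b0 + b1 * x), from = -4, to = 6,
      xlab = "x", ylab = "probability that y = 1")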



Binary Logistic Regression Using a Categorical Predictor with Two Categories, Whose Effect Is Encoded Using a Dummy Variable



If your $x$ variable were a categorical predictor with, say, $2$ categories, then it would be coded via a dummy variable $x$ in your model, such that $x = 0$ for the first (or reference) category and $x = 1$ for the second (or non-reference) category. In that case, your binary logistic regression model would still be expressed as in equation (B). However, since $x$ is a dummy variable, the model simplifies to:



          log(odds that y = 1) = beta0 for the reference category of x (C1) 


          and



          log(odds that y = 1) = beta0 + beta1 for the non-reference category of x (C2)


          The equations (C1) and (C2) can be further manipulated and re-expressed as:



          (probability that y = 1) = exp(beta0)/[1 + exp(beta0)] for the reference category of x (D1)


          and



          (probability that y = 1) =
          exp(beta0 + beta1)/[1 + exp(beta0 + beta1)] for the non-reference category of x (D2)


          What Is the Utility of a Binary Logistic Regression with a Categorical Predictor with Two Categories?



          So what is the utility of the binary logistic regression when $x$ is a dummy variable?



          The model allows you to estimate two different probabilities that $y = 1$: one for $x = 0$ (as per equation (D1)) and one for $x = 1$ (as per equation (D2)).
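For concreteness, here is a small sketch with made-up coefficient values (beta0 and beta1 below are hypothetical, not estimated from any data), evaluating equations (D1) and (D2):

beta0 <- -0.22
beta1 <-  0.35

# (D1): probability that y = 1 for the reference category (x = 0)
p_ref <- exp(beta0) / (1 + exp(beta0))                        # equivalently plogis(beta0)

# (D2): probability that y = 1 for the non-reference category (x = 1)
p_nonref <- exp(beta0 + beta1) / (1 + exp(beta0 + beta1))     # plogis(beta0 + beta1)

c(p_ref = p_ref, p_nonref = p_nonref)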



          You could create a plot to visualize these two probabilities as a function of $x$ and superimpose the observed values of $y$ for $x = 0$ (i.e., a whole bunch of zeroes and ones sitting on top of $x = 0$) and for $x = 1$ (i.e., a whole bunch of zeroes and ones sitting on top of $x = 1$). The plot would look like this:



      ^
      |
y = 1 |    1          1
      |
      |               *
      |
      |    *
      |
y = 0 |    0          0
      |
      |--------------------->
         x = 0      x = 1
              x-axis


          In this plot, you can see the zero values (i.e., $y = 0$) stacked atop $x = 0$ and $x = 1$, as well as the one values (i.e., $y = 1$) stacked atop $x = 0$ and $x = 1$. The * symbols denote the estimated values of the probability that $y = 1$.



There are no curves in this plot, as you are just estimating two distinct probabilities.



          If you wanted to, you could connect these estimated probabilities with a straight line to indicate whether the estimated probability that $y = 1$ increases or decreases when you move from $x = 0$ to $x = 1$. Of course, you could also jitter the zeroes and ones shown in the plot to avoid plotting them right on top of each other.



          What Is the Utility of a Binary Logistic Regression with a Categorical Predictor with More Than Two Categories?



          If your categorical predictor variable $x$ has $k$ categories, where $k > 2$, then your model would include $k - 1$ dummy variables and could be written to make it clear that it estimates $k$ distinct probabilities that $y = 1$ (one for each category of $x$). You could visualize the estimated probabilities by extending the plot shown above to incorporate $k$ categories for $x$. For example, if $k = 3$, the plot would look like this:



      ^
      |
y = 1 |    1         1         1
      |                        *
      |              *
      |
      |    *
      |
y = 0 |    0         0         0
      |
      |------------------------------>
        x = 1st   x = 2nd   x = 3rd
                  x-axis


          where 1st, 2nd and 3rd refer to the first, second and third category of the categorical predictor variable $x$.
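A minimal simulated example of the $k = 3$ case (the category probabilities 0.3, 0.5, 0.7 below are made up), showing that the fitted model uses $k - 1 = 2$ dummy variables and returns one estimated probability per category:

set.seed(42)
x <- factor(sample(c("a", "b", "c"), 600, replace = TRUE))
p_true <- c(0.3, 0.5, 0.7)[as.integer(x)]
y <- rbinom(600, 1, p_true)

fit <- glm(y ~ x, family = binomial)   # dummy variables xb and xc, with "a" as the reference
coef(fit)

# One estimated probability that y = 1 for each category of x
predict(fit, newdata = data.frame(x = factor(c("a", "b", "c"))), type = "response")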



          Creating the suggested plots using R and simulated data



          Note that the effects package in R will create plots similar to what I suggested here, except that the plots will NOT show the observed values of $y$ corresponding to each category of $x$ and will display uncertainty intervals (i.e., 95% confidence intervals) around the plotted (estimated) probabilities. Simply use these commands:



install.packages("effects")   # install once
library(effects)

# y: binary response, x: categorical predictor, data: your data frame
model <- glm(y ~ x, data = data, family = "binomial")

plot(allEffects(model))


          For the data in your post, this would be:



library(effects)

set.seed(2019)
n <- 1000
y <- as.factor(rbinom(2*n, 1, 0.6))
x <- as.factor(rbinom(2*n, 1, 0.6))
df <- data.frame(x = x, y = y)

model <- glm(y ~ x, data = df, family = "binomial")
plot(allEffects(model), ylim = c(0, 1))


          The resulting plot can be seen at https://m.imgur.com/klwame5.






answered 1 hour ago by Isabella Ghement (edited 10 mins ago)








• Thank you! Very well explained. If you don't mind, I have made an edit to your post which you may consider here (it's simply formatting, nothing about actual content, perhaps to make things more obvious to future readers): vpaste.net/iYflt . If possible, Isabella, I would appreciate a consideration of how the logistic curve ties in here; it feels that the thought of fitting it doesn't apply in the same way. If it makes no sense to try to think about it in these terms then I would be happy to hear that. – baxx, 1 hour ago

• @baxx: Thank you for your edits - I incorporated some in my post, as suggested. I doubt anyone other than yourself would find my answer useful - if you understood it, that's all that matters. The only "curve" you can speak of is the one that connects the estimated probabilities in an effects plot such as the one you suggested. But that "curve" is simply useful as a visual aid to help you judge whether one estimated probability is smaller/larger than the other(s). – Isabella Ghement, 3 mins ago

• Because x is categorical, there is really no "curve" to speak of in technical terms - we don't know what the probability would look like "in between" categories of x, nor even whether that probability would be defined there. – Isabella Ghement, 2 mins ago











          Your Answer





          StackExchange.ifUsing("editor", function ()
          return StackExchange.using("mathjaxEditing", function ()
          StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
          StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
          );
          );
          , "mathjax-editing");

          StackExchange.ready(function()
          var channelOptions =
          tags: "".split(" "),
          id: "65"
          ;
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function()
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled)
          StackExchange.using("snippets", function()
          createEditor();
          );

          else
          createEditor();

          );

          function createEditor()
          StackExchange.prepareEditor(
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: false,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: null,
          bindNavPrevention: true,
          postfix: "",
          imageUploader:
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          ,
          onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          );



          );













          draft saved

          draft discarded


















          StackExchange.ready(
          function ()
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstats.stackexchange.com%2fquestions%2f400452%2fhow-to-explain-the-utility-of-binomial-logistic-regression-when-the-predictors-a%23new-answer', 'question_page');

          );

          Post as a guest















          Required, but never shown

























          2 Answers
          2






          active

          oldest

          votes








          2 Answers
          2






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          1












          $begingroup$

          First, you could make a graph like that with a categorical x. It's true that that curve would not make much sense, but ...so? You could say similar things about curves used in evaluating linear regression.



          Second, you can look at crosstabulations, this is especially useful for comparing the DV to a single categorical IV (which is what your plot above does, for a continuous IV). A more graphical way to look at this is a mosaic plot.



          Third, it gets more interesting when you look at multiple IVs. A mosaic plot can handle two IVs pretty easily, but they get messy with more. If there are not a great many variables or levels, you can get the predicted probablity for every combination.






          share|cite|improve this answer









          $endgroup$












          • $begingroup$
            thanks, please see the edit that I have made to this post. It seems that you're suggesting (please correct me if i'm wrong) that there's not really much to say with respect to the use of the logistic curve in the case of such a set up as this post. Other than it happens to have properties which enable us to map odds -> probabilities (which I'm not suggesting is useless). I was just wondering whether there was a way to demonstrate the utility of the curve in a similar manner to the use when the predictor is continuous.
            $endgroup$
            – baxx
            2 hours ago







          • 1




            $begingroup$
            @baxx: Please also see my answer, in addition to Peter's answer.
            $endgroup$
            – Isabella Ghement
            1 hour ago















          1












          $begingroup$

          First, you could make a graph like that with a categorical x. It's true that that curve would not make much sense, but ...so? You could say similar things about curves used in evaluating linear regression.



          Second, you can look at crosstabulations, this is especially useful for comparing the DV to a single categorical IV (which is what your plot above does, for a continuous IV). A more graphical way to look at this is a mosaic plot.



          Third, it gets more interesting when you look at multiple IVs. A mosaic plot can handle two IVs pretty easily, but they get messy with more. If there are not a great many variables or levels, you can get the predicted probablity for every combination.






          share|cite|improve this answer









          $endgroup$












          • $begingroup$
            thanks, please see the edit that I have made to this post. It seems that you're suggesting (please correct me if i'm wrong) that there's not really much to say with respect to the use of the logistic curve in the case of such a set up as this post. Other than it happens to have properties which enable us to map odds -> probabilities (which I'm not suggesting is useless). I was just wondering whether there was a way to demonstrate the utility of the curve in a similar manner to the use when the predictor is continuous.
            $endgroup$
            – baxx
            2 hours ago







          • 1




            $begingroup$
            @baxx: Please also see my answer, in addition to Peter's answer.
            $endgroup$
            – Isabella Ghement
            1 hour ago













          1












          1








          1





          $begingroup$

          First, you could make a graph like that with a categorical x. It's true that that curve would not make much sense, but ...so? You could say similar things about curves used in evaluating linear regression.



          Second, you can look at crosstabulations, this is especially useful for comparing the DV to a single categorical IV (which is what your plot above does, for a continuous IV). A more graphical way to look at this is a mosaic plot.



          Third, it gets more interesting when you look at multiple IVs. A mosaic plot can handle two IVs pretty easily, but they get messy with more. If there are not a great many variables or levels, you can get the predicted probablity for every combination.






          share|cite|improve this answer









          $endgroup$



          First, you could make a graph like that with a categorical x. It's true that that curve would not make much sense, but ...so? You could say similar things about curves used in evaluating linear regression.



          Second, you can look at crosstabulations, this is especially useful for comparing the DV to a single categorical IV (which is what your plot above does, for a continuous IV). A more graphical way to look at this is a mosaic plot.



          Third, it gets more interesting when you look at multiple IVs. A mosaic plot can handle two IVs pretty easily, but they get messy with more. If there are not a great many variables or levels, you can get the predicted probablity for every combination.







          share|cite|improve this answer












          share|cite|improve this answer



          share|cite|improve this answer










          answered 4 hours ago









          Peter FlomPeter Flom

          76.7k11109214




          76.7k11109214











          • $begingroup$
            thanks, please see the edit that I have made to this post. It seems that you're suggesting (please correct me if i'm wrong) that there's not really much to say with respect to the use of the logistic curve in the case of such a set up as this post. Other than it happens to have properties which enable us to map odds -> probabilities (which I'm not suggesting is useless). I was just wondering whether there was a way to demonstrate the utility of the curve in a similar manner to the use when the predictor is continuous.
            $endgroup$
            – baxx
            2 hours ago







          • 1




            $begingroup$
            @baxx: Please also see my answer, in addition to Peter's answer.
            $endgroup$
            – Isabella Ghement
            1 hour ago
















          • $begingroup$
            thanks, please see the edit that I have made to this post. It seems that you're suggesting (please correct me if i'm wrong) that there's not really much to say with respect to the use of the logistic curve in the case of such a set up as this post. Other than it happens to have properties which enable us to map odds -> probabilities (which I'm not suggesting is useless). I was just wondering whether there was a way to demonstrate the utility of the curve in a similar manner to the use when the predictor is continuous.
            $endgroup$
            – baxx
            2 hours ago







          • 1




            $begingroup$
            @baxx: Please also see my answer, in addition to Peter's answer.
            $endgroup$
            – Isabella Ghement
            1 hour ago















          $begingroup$
          thanks, please see the edit that I have made to this post. It seems that you're suggesting (please correct me if i'm wrong) that there's not really much to say with respect to the use of the logistic curve in the case of such a set up as this post. Other than it happens to have properties which enable us to map odds -> probabilities (which I'm not suggesting is useless). I was just wondering whether there was a way to demonstrate the utility of the curve in a similar manner to the use when the predictor is continuous.
          $endgroup$
          – baxx
          2 hours ago





          $begingroup$
          thanks, please see the edit that I have made to this post. It seems that you're suggesting (please correct me if i'm wrong) that there's not really much to say with respect to the use of the logistic curve in the case of such a set up as this post. Other than it happens to have properties which enable us to map odds -> probabilities (which I'm not suggesting is useless). I was just wondering whether there was a way to demonstrate the utility of the curve in a similar manner to the use when the predictor is continuous.
          $endgroup$
          – baxx
          2 hours ago





          1




          1




          $begingroup$
          @baxx: Please also see my answer, in addition to Peter's answer.
          $endgroup$
          – Isabella Ghement
          1 hour ago




          $begingroup$
          @baxx: Please also see my answer, in addition to Peter's answer.
          $endgroup$
          – Isabella Ghement
          1 hour ago













          1












          $begingroup$

          Overview of Binary Logistic Regression Using a Continuous Predictor



          A binary logistic regression model with continuous predictor variable $x$ has the form:



          log(odds that y = 1) = beta0 + beta1 * x (A)


          According to this model, the continuous predictor variable $x$ has a linear effect on the log odds that the binary response variable $y$ is equal to 1 (rather than 0).



          One can easily show this model to be equivalent to the following model:



           (probability that y = 1) =

          exp(beta0 + beta1 * x)/[1 + exp(beta0 + beta1 * x)] (B)


          In the equivalent model, the continuous predictor $x$ has a nonlinear effect on the probability that $y = 1$.



          In the plot that you shared, the S-shaped blue curve is obtained by plotting the right hand side of equation (B) above as a function of $x$ and shows how the probability that $y = 1$ increases (nonlinearly) as the values of $x$ increase.



          BinaryLogistic Regression Using a Categorical Predictor with Two Categories, Whose Effect is Encoded Using a Dummy Variable



          If your x variable were a categorical predictor with, say, $2$ categories, then it would be coded via a dummy variable $x$ in your model, such that x = 0 for the first (or reference) category and $x = 1$ for the second (or non-reference) category. In that case, your binary logistic regression model would still be expressed as in equation (B). However, since $x$ is a dummy variable, the model would be simplified as:



          log(odds that y = 1) = beta0 for the reference category of x (C1) 


          and



          log(odds that y = 1) = beta0 + beta1 for the non-reference category of x (C2)


          The equations (C1) and (C2) can be further manipulated and re-expressed as:



          (probability that y = 1) = exp(beta0)/[1 + exp(beta0)] for the reference category of x (D1)


          and



          (probability that y = 1) =
          exp(beta0 + beta1)/[1 + exp(beta0 + beta1)] for the non-reference category of x (D2)


          What Is the Utility of a Binary Logistic Regression with a Categorical Predictor with Two Categories?



          So what is the utility of the binary logistic regression when $x$ is a dummy variable?



          The model allows you to estimate two different probabilities that $y = 1$: one for $x = 0$ (as per equation (D1)) and one for $x = 1$ (as per equation (D2)).



          You could create a plot to visualize these two probabilities as a function of $x$ and superimpose the observed values of $y$ for $x = 0$ (i.e., a whole bunch of zeroes and ones sitting on top of $x = 0$) and for $x = 1$ (i.e., a whole bunch of zeroes and ones sitting on top of $x = 1$). The plot would look like this:



           ^
          |
          y = 1 | 1 1
          |
          | *
          |
          | *
          |
          y = 0 | 0 0
          |
          |------------------>
          x = 0 x = 1
          x-axis


          In this plot, you can see the zero values (i.e., $y = 0$) stacked atop $x = 0$ and $x = 1$, as well as the one values (i.e., $y = 1$) stacked atop $x = 0$ and $x = 1$. The * symbols denote the estimated values of the probability that $y = 1$.



          There are no more curves in this plot as you are just estimating two distinct probabilities.



          If you wanted to, you could connect these estimated probabilities with a straight line to indicate whether the estimated probability that $y = 1$ increases or decreases when you move from $x = 0$ to $x = 1$. Of course, you could also jitter the zeroes and ones shown in the plot to avoid plotting them right on top of each other.



          What Is the Utility of a Binary Logistic Regression with a Categorical Predictor with More Than Two Categories?



          If your categorical predictor variable $x$ has $k$ categories, where $k > 2$, then your model would include $k - 1$ dummy variables and could be written to make it clear that it estimates $k$ distinct probabilities that $y = 1$ (one for each category of $x$). You could visualize the estimated probabilities by extending the plot shown above to incorporate $k$ categories for $x$. For example, if $k = 3$, the plot would look like this:



           ^
          |
          y = 1| 1 1 1
          | *
          | *
          |
          | *
          |
          y = 0| 0 0 0
          |
          |---------------------------------->
          x = 1st x = 2nd x = 3rd
          x-axis


          where 1st, 2nd and 3rd refer to the first, second and third category of the categorical predictor variable $x$.



          Creating the suggested plots using R and simulated data



          Note that the effects package in R will create plots similar to what I suggested here, except that the plots will NOT show the observed values of $y$ corresponding to each category of $x$ and will display uncertainty intervals (i.e., 95% confidence intervals) around the plotted (estimated) probabilities. Simply use these commands:



          install.packages("effects")
          library(effects)

          model <- glm(y ~ x, data = data, family = "binomial")

          plot(allEffects(model))


          For the data in your post, this would be:



          library(effects)

          set.seed(2019)

          n = 1000

          y = as.factor(rbinom(2*n, 1, 0.6))

          x = as.factor(rbinom(2*n, 1, 0.6))

          df = data.frame(x=x,y=y)

          model <- glm(y ~ x, data = df, family = "binomial")

          plot(allEffects(model),ylim=c(0,1))


          The resulting plot can be seen at https://m.imgur.com/klwame5.






          share|cite|improve this answer











          $endgroup$








          • 1




            $begingroup$
            Thank you! Very well explained - if you don't mind I have made an edit to you post which you may consider here (it's simply formatting, nothing about actual content, perhaps to make things more obvious to future readers) : vpaste.net/iYflt . If possible Isabella I would appreciate a conderation about how the logistic curve ties in here, it feels that the thought of fitting it doesn't apply in the same way. If it makes no sense to try and think about it in these terms then I would be happy to hear that.
            $endgroup$
            – baxx
            1 hour ago










          • $begingroup$
            @baxx: Thank you for your edits - I incorporated some in my post, as suggested. I doubt anyone other than yourself would find my answer useful - if you understood it, that's all that matters. The only "curve" you can speak of is that which connects the estimated probabilities in an effects plot such as the one you suggested. But that "curve" is simply useful as a visual aid to help you judge whether one estimated probability is smaller/larger than other(s).
            $endgroup$
            – Isabella Ghement
            3 mins ago










          • $begingroup$
            Because x is categorical, there is really no "curve" to speak of in technical terms - we don't know what the probability would look like "in between" categories of x and not even if that probability would be defined there.
            $endgroup$
            – Isabella Ghement
            2 mins ago















          1












          $begingroup$

          Overview of Binary Logistic Regression Using a Continuous Predictor



          A binary logistic regression model with continuous predictor variable $x$ has the form:



          log(odds that y = 1) = beta0 + beta1 * x (A)


          According to this model, the continuous predictor variable $x$ has a linear effect on the log odds that the binary response variable $y$ is equal to 1 (rather than 0).



          One can easily show this model to be equivalent to the following model:



           (probability that y = 1) =

          exp(beta0 + beta1 * x)/[1 + exp(beta0 + beta1 * x)] (B)


          In the equivalent model, the continuous predictor $x$ has a nonlinear effect on the probability that $y = 1$.



          In the plot that you shared, the S-shaped blue curve is obtained by plotting the right hand side of equation (B) above as a function of $x$ and shows how the probability that $y = 1$ increases (nonlinearly) as the values of $x$ increase.



          BinaryLogistic Regression Using a Categorical Predictor with Two Categories, Whose Effect is Encoded Using a Dummy Variable



          If your x variable were a categorical predictor with, say, $2$ categories, then it would be coded via a dummy variable $x$ in your model, such that x = 0 for the first (or reference) category and $x = 1$ for the second (or non-reference) category. In that case, your binary logistic regression model would still be expressed as in equation (B). However, since $x$ is a dummy variable, the model would be simplified as:



          log(odds that y = 1) = beta0 for the reference category of x (C1) 


          and



          log(odds that y = 1) = beta0 + beta1 for the non-reference category of x (C2)


          The equations (C1) and (C2) can be further manipulated and re-expressed as:



          (probability that y = 1) = exp(beta0)/[1 + exp(beta0)] for the reference category of x (D1)


          and



          (probability that y = 1) =
          exp(beta0 + beta1)/[1 + exp(beta0 + beta1)] for the non-reference category of x (D2)


          What Is the Utility of a Binary Logistic Regression with a Categorical Predictor with Two Categories?



          So what is the utility of the binary logistic regression when $x$ is a dummy variable?



          The model allows you to estimate two different probabilities that $y = 1$: one for $x = 0$ (as per equation (D1)) and one for $x = 1$ (as per equation (D2)).



          You could create a plot to visualize these two probabilities as a function of $x$ and superimpose the observed values of $y$ for $x = 0$ (i.e., a whole bunch of zeroes and ones sitting on top of $x = 0$) and for $x = 1$ (i.e., a whole bunch of zeroes and ones sitting on top of $x = 1$). The plot would look like this:



           ^
          |
          y = 1 | 1 1
          |
          | *
          |
          | *
          |
          y = 0 | 0 0
          |
          |------------------>
          x = 0 x = 1
          x-axis


          In this plot, you can see the zero values (i.e., $y = 0$) stacked atop $x = 0$ and $x = 1$, as well as the one values (i.e., $y = 1$) stacked atop $x = 0$ and $x = 1$. The * symbols denote the estimated values of the probability that $y = 1$.



          There are no more curves in this plot as you are just estimating two distinct probabilities.



          If you wanted to, you could connect these estimated probabilities with a straight line to indicate whether the estimated probability that $y = 1$ increases or decreases when you move from $x = 0$ to $x = 1$. Of course, you could also jitter the zeroes and ones shown in the plot to avoid plotting them right on top of each other.



          What Is the Utility of a Binary Logistic Regression with a Categorical Predictor with More Than Two Categories?



          If your categorical predictor variable $x$ has $k$ categories, where $k > 2$, then your model would include $k - 1$ dummy variables and could be written to make it clear that it estimates $k$ distinct probabilities that $y = 1$ (one for each category of $x$). You could visualize the estimated probabilities by extending the plot shown above to incorporate $k$ categories for $x$. For example, if $k = 3$, the plot would look like this:



           ^
          |
          y = 1| 1 1 1
          | *
          | *
          |
          | *
          |
          y = 0| 0 0 0
          |
          |---------------------------------->
          x = 1st x = 2nd x = 3rd
          x-axis


          where 1st, 2nd and 3rd refer to the first, second and third category of the categorical predictor variable $x$.



          Creating the suggested plots using R and simulated data



          Note that the effects package in R will create plots similar to what I suggested here, except that the plots will NOT show the observed values of $y$ corresponding to each category of $x$ and will display uncertainty intervals (i.e., 95% confidence intervals) around the plotted (estimated) probabilities. Simply use these commands:



          install.packages("effects")
          library(effects)

          model <- glm(y ~ x, data = data, family = "binomial")

          plot(allEffects(model))


          For the data in your post, this would be:



          library(effects)

          set.seed(2019)

          n = 1000

          y = as.factor(rbinom(2*n, 1, 0.6))

          x = as.factor(rbinom(2*n, 1, 0.6))

          df = data.frame(x=x,y=y)

          model <- glm(y ~ x, data = df, family = "binomial")

          plot(allEffects(model),ylim=c(0,1))


          The resulting plot can be seen at https://m.imgur.com/klwame5.






          share|cite|improve this answer











          $endgroup$








          • 1




            $begingroup$
            Thank you! Very well explained - if you don't mind I have made an edit to you post which you may consider here (it's simply formatting, nothing about actual content, perhaps to make things more obvious to future readers) : vpaste.net/iYflt . If possible Isabella I would appreciate a conderation about how the logistic curve ties in here, it feels that the thought of fitting it doesn't apply in the same way. If it makes no sense to try and think about it in these terms then I would be happy to hear that.
            $endgroup$
            – baxx
            1 hour ago










          • $begingroup$
            @baxx: Thank you for your edits - I incorporated some in my post, as suggested. I doubt anyone other than yourself would find my answer useful - if you understood it, that's all that matters. The only "curve" you can speak of is that which connects the estimated probabilities in an effects plot such as the one you suggested. But that "curve" is simply useful as a visual aid to help you judge whether one estimated probability is smaller/larger than other(s).
            $endgroup$
            – Isabella Ghement
            3 mins ago










          • $begingroup$
            Because x is categorical, there is really no "curve" to speak of in technical terms - we don't know what the probability would look like "in between" categories of x and not even if that probability would be defined there.
            $endgroup$
            – Isabella Ghement
            2 mins ago













          1












          1








          1





          $begingroup$

          Overview of Binary Logistic Regression Using a Continuous Predictor



          A binary logistic regression model with continuous predictor variable $x$ has the form:



          log(odds that y = 1) = beta0 + beta1 * x (A)


          According to this model, the continuous predictor variable $x$ has a linear effect on the log odds that the binary response variable $y$ is equal to 1 (rather than 0).



          One can easily show this model to be equivalent to the following model:



           (probability that y = 1) =

          exp(beta0 + beta1 * x)/[1 + exp(beta0 + beta1 * x)] (B)


          In the equivalent model, the continuous predictor $x$ has a nonlinear effect on the probability that $y = 1$.



          In the plot that you shared, the S-shaped blue curve is obtained by plotting the right hand side of equation (B) above as a function of $x$ and shows how the probability that $y = 1$ increases (nonlinearly) as the values of $x$ increase.



          BinaryLogistic Regression Using a Categorical Predictor with Two Categories, Whose Effect is Encoded Using a Dummy Variable



          If your x variable were a categorical predictor with, say, $2$ categories, then it would be coded via a dummy variable $x$ in your model, such that x = 0 for the first (or reference) category and $x = 1$ for the second (or non-reference) category. In that case, your binary logistic regression model would still be expressed as in equation (B). However, since $x$ is a dummy variable, the model would be simplified as:



          log(odds that y = 1) = beta0 for the reference category of x (C1) 


          and



          log(odds that y = 1) = beta0 + beta1 for the non-reference category of x (C2)


          The equations (C1) and (C2) can be further manipulated and re-expressed as:



          (probability that y = 1) = exp(beta0)/[1 + exp(beta0)] for the reference category of x (D1)


          and



          (probability that y = 1) =
          exp(beta0 + beta1)/[1 + exp(beta0 + beta1)] for the non-reference category of x (D2)


          What Is the Utility of a Binary Logistic Regression with a Categorical Predictor with Two Categories?



          So what is the utility of the binary logistic regression when $x$ is a dummy variable?



          The model allows you to estimate two different probabilities that $y = 1$: one for $x = 0$ (as per equation (D1)) and one for $x = 1$ (as per equation (D2)).



          You could create a plot to visualize these two probabilities as a function of $x$ and superimpose the observed values of $y$ for $x = 0$ (i.e., a whole bunch of zeroes and ones sitting on top of $x = 0$) and for $x = 1$ (i.e., a whole bunch of zeroes and ones sitting on top of $x = 1$). The plot would look like this:



           ^
          |
          y = 1 | 1 1
          |
          | *
          |
          | *
          |
          y = 0 | 0 0
          |
          |------------------>
          x = 0 x = 1
          x-axis


          In this plot, you can see the zero values (i.e., $y = 0$) stacked atop $x = 0$ and $x = 1$, as well as the one values (i.e., $y = 1$) stacked atop $x = 0$ and $x = 1$. The * symbols denote the estimated values of the probability that $y = 1$.



          There are no more curves in this plot as you are just estimating two distinct probabilities.



          If you wanted to, you could connect these estimated probabilities with a straight line to indicate whether the estimated probability that $y = 1$ increases or decreases when you move from $x = 0$ to $x = 1$. Of course, you could also jitter the zeroes and ones shown in the plot to avoid plotting them right on top of each other.



          What Is the Utility of a Binary Logistic Regression with a Categorical Predictor with More Than Two Categories?



          If your categorical predictor variable $x$ has $k$ categories, where $k > 2$, then your model would include $k - 1$ dummy variables and could be written to make it clear that it estimates $k$ distinct probabilities that $y = 1$ (one for each category of $x$). You could visualize the estimated probabilities by extending the plot shown above to incorporate $k$ categories for $x$. For example, if $k = 3$, the plot would look like this:



           ^
          |
          y = 1| 1 1 1
          | *
          | *
          |
          | *
          |
          y = 0| 0 0 0
          |
          |---------------------------------->
          x = 1st x = 2nd x = 3rd
          x-axis


          where 1st, 2nd and 3rd refer to the first, second and third category of the categorical predictor variable $x$.



          Creating the suggested plots using R and simulated data



          Note that the effects package in R will create plots similar to what I suggested here, except that the plots will NOT show the observed values of $y$ corresponding to each category of $x$ and will display uncertainty intervals (i.e., 95% confidence intervals) around the plotted (estimated) probabilities. Simply use these commands:



          install.packages("effects")
          library(effects)

          model <- glm(y ~ x, data = data, family = "binomial")

          plot(allEffects(model))


          For the data in your post, this would be:



          library(effects)

          set.seed(2019)

          n = 1000

          y = as.factor(rbinom(2*n, 1, 0.6))

          x = as.factor(rbinom(2*n, 1, 0.6))

          df = data.frame(x=x,y=y)

          model <- glm(y ~ x, data = df, family = "binomial")

          plot(allEffects(model),ylim=c(0,1))


          The resulting plot can be seen at https://m.imgur.com/klwame5.






          share|cite|improve this answer











          $endgroup$



          Overview of Binary Logistic Regression Using a Continuous Predictor



          A binary logistic regression model with continuous predictor variable $x$ has the form:



          log(odds that y = 1) = beta0 + beta1 * x (A)


          According to this model, the continuous predictor variable $x$ has a linear effect on the log odds that the binary response variable $y$ is equal to 1 (rather than 0).



          One can easily show this model to be equivalent to the following model:



           (probability that y = 1) =

          exp(beta0 + beta1 * x)/[1 + exp(beta0 + beta1 * x)] (B)


          In the equivalent model, the continuous predictor $x$ has a nonlinear effect on the probability that $y = 1$.



          In the plot that you shared, the S-shaped blue curve is obtained by plotting the right hand side of equation (B) above as a function of $x$ and shows how the probability that $y = 1$ increases (nonlinearly) as the values of $x$ increase.



          BinaryLogistic Regression Using a Categorical Predictor with Two Categories, Whose Effect is Encoded Using a Dummy Variable



          If your x variable were a categorical predictor with, say, $2$ categories, then it would be coded via a dummy variable $x$ in your model, such that x = 0 for the first (or reference) category and $x = 1$ for the second (or non-reference) category. In that case, your binary logistic regression model would still be expressed as in equation (B). However, since $x$ is a dummy variable, the model would be simplified as:



          log(odds that y = 1) = beta0 for the reference category of x (C1) 


          and



          log(odds that y = 1) = beta0 + beta1 for the non-reference category of x (C2)


          The equations (C1) and (C2) can be further manipulated and re-expressed as:



          (probability that y = 1) = exp(beta0)/[1 + exp(beta0)] for the reference category of x (D1)


          and



          (probability that y = 1) =
          exp(beta0 + beta1)/[1 + exp(beta0 + beta1)] for the non-reference category of x (D2)


          What Is the Utility of a Binary Logistic Regression with a Categorical Predictor with Two Categories?



          So what is the utility of the binary logistic regression when $x$ is a dummy variable?



          The model allows you to estimate two different probabilities that $y = 1$: one for $x = 0$ (as per equation (D1)) and one for $x = 1$ (as per equation (D2)).



          You could create a plot to visualize these two probabilities as a function of $x$ and superimpose the observed values of $y$ for $x = 0$ (i.e., a whole bunch of zeroes and ones sitting on top of $x = 0$) and for $x = 1$ (i.e., a whole bunch of zeroes and ones sitting on top of $x = 1$). The plot would look like this:



           ^
          |
          y = 1 | 1 1
          |
          | *
          |
          | *
          |
          y = 0 | 0 0
          |
          |------------------>
          x = 0 x = 1
          x-axis


          In this plot, you can see the zero values (i.e., $y = 0$) stacked atop $x = 0$ and $x = 1$, as well as the one values (i.e., $y = 1$) stacked atop $x = 0$ and $x = 1$. The * symbols denote the estimated values of the probability that $y = 1$.



          There are no more curves in this plot as you are just estimating two distinct probabilities.



          If you wanted to, you could connect these estimated probabilities with a straight line to indicate whether the estimated probability that $y = 1$ increases or decreases when you move from $x = 0$ to $x = 1$. Of course, you could also jitter the zeroes and ones shown in the plot to avoid plotting them right on top of each other.



          What Is the Utility of a Binary Logistic Regression with a Categorical Predictor with More Than Two Categories?



          If your categorical predictor variable $x$ has $k$ categories, where $k > 2$, then your model would include $k - 1$ dummy variables and could be written to make it clear that it estimates $k$ distinct probabilities that $y = 1$ (one for each category of $x$). You could visualize the estimated probabilities by extending the plot shown above to incorporate $k$ categories for $x$. For example, if $k = 3$, the plot would look like this:



           ^
          |
          y = 1| 1 1 1
          | *
          | *
          |
          | *
          |
          y = 0| 0 0 0
          |
          |---------------------------------->
          x = 1st x = 2nd x = 3rd
          x-axis


          where 1st, 2nd and 3rd refer to the first, second and third category of the categorical predictor variable $x$.



          Creating the suggested plots using R and simulated data



          Note that the effects package in R will create plots similar to what I suggested here, except that the plots will NOT show the observed values of $y$ corresponding to each category of $x$ and will display uncertainty intervals (i.e., 95% confidence intervals) around the plotted (estimated) probabilities. Simply use these commands:



          install.packages("effects")
          library(effects)

          model <- glm(y ~ x, data = data, family = "binomial")

          plot(allEffects(model))


          For the data in your post, this would be:



          library(effects)

          set.seed(2019)

          n = 1000

          y = as.factor(rbinom(2*n, 1, 0.6))

          x = as.factor(rbinom(2*n, 1, 0.6))

          df = data.frame(x=x,y=y)

          model <- glm(y ~ x, data = df, family = "binomial")

          plot(allEffects(model),ylim=c(0,1))


          The resulting plot can be seen at https://m.imgur.com/klwame5.







          share|cite|improve this answer














          share|cite|improve this answer



          share|cite|improve this answer








          edited 10 mins ago

























          answered 1 hour ago









          Isabella GhementIsabella Ghement

          7,638422




          7,638422







          • 1




            $begingroup$
            Thank you! Very well explained - if you don't mind I have made an edit to you post which you may consider here (it's simply formatting, nothing about actual content, perhaps to make things more obvious to future readers) : vpaste.net/iYflt . If possible Isabella I would appreciate a conderation about how the logistic curve ties in here, it feels that the thought of fitting it doesn't apply in the same way. If it makes no sense to try and think about it in these terms then I would be happy to hear that.
            $endgroup$
            – baxx
            1 hour ago










          • $begingroup$
            @baxx: Thank you for your edits - I incorporated some in my post, as suggested. I doubt anyone other than yourself would find my answer useful - if you understood it, that's all that matters. The only "curve" you can speak of is that which connects the estimated probabilities in an effects plot such as the one you suggested. But that "curve" is simply useful as a visual aid to help you judge whether one estimated probability is smaller/larger than other(s).
            $endgroup$
            – Isabella Ghement
            3 mins ago










          • Because x is categorical, there is really no "curve" to speak of in technical terms - we don't know what the probability would look like "in between" categories of x, or even whether it would be defined there.
            – Isabella Ghement
            2 mins ago
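
          As a quick check on this point (using the fitted model and the df data frame from the simulated example above, so an assumption on my part rather than something stated in the discussion), the fitted probabilities take only one distinct value per category of $x$; nothing is estimated "in between", so any line connecting them is purely a visual guide:

          # one fitted probability per observed category of x
          tapply(fitted(model), df$x, unique)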











