Is it ever recommended to use mean/multiple imputation when using tree-based predictive models?


Every time I build a predictive model and have missing data, I impute categorical variables with something like "UNKNOWN" and numerical variables with some absurd number that will never be seen in practice (even if the variable is unbounded, I can exponentiate the variable and make the unknown values negative).



The main advantage is that the model knows when the variable is missing, which is not the case for, say, mean imputation. I can see that this could be disastrous in linear models or neural networks, but tree-based models handle it quite smoothly.



I know there is a great deal of literature on missing-data imputation, but when and why would I ever use those methods when handling missing data for predictive (tree-based) models?
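For concreteness, the scheme above can be sketched in a few lines of plain Python (the column names, the "UNKNOWN" label, and the sentinel value are illustrative, not from any particular library):

```python
SENTINEL = -1e9  # an "absurd" value that never occurs in the real data

def impute_with_sentinels(rows, categorical, numeric):
    """Return a copy of `rows` (a list of dicts) with missing values replaced.

    None in a categorical column becomes the literal category "UNKNOWN";
    None in a numeric column becomes SENTINEL, so a tree can isolate the
    missing rows with a single split.
    """
    imputed = []
    for row in rows:
        new_row = dict(row)
        for col in categorical:
            if new_row.get(col) is None:
                new_row[col] = "UNKNOWN"
        for col in numeric:
            if new_row.get(col) is None:
                new_row[col] = SENTINEL
        imputed.append(new_row)
    return imputed

rows = [{"job": None, "income": 50_000.0},
        {"job": "engineer", "income": None}]
filled = impute_with_sentinels(rows, categorical=["job"], numeric=["income"])
# filled[0]["job"] == "UNKNOWN", filled[1]["income"] == SENTINEL
```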










missing-data cart boosting data-imputation multiple-imputation






asked 1 hour ago









gsmafra











  • Imputing a large number for numeric data could be very bad for tree-based models. Think of it this way: if your split is on income, say at 100k, then everyone who was missing ends up in the same branch as the high earners.
    – astel
    1 hour ago










  • The model will be fitted with those imputed values as well, so if they are significantly different from people with truly high incomes, the tree should put a split between the true-high and fake-high (missing) incomes. If variability inside the tree node is low, there is not much to worry about.
    – gsmafra
    1 hour ago
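The disagreement above can be made concrete with a few lines of Python (the incomes are made up). With a sentinel far below every real value, a single threshold cleanly isolates the missing rows; after mean imputation, no threshold can:

```python
SENTINEL = -1e9
incomes = [55_000.0, 120_000.0, None, 80_000.0, None]

observed = [x for x in incomes if x is not None]
mean = sum(observed) / len(observed)

sentinel_filled = [SENTINEL if x is None else x for x in incomes]
mean_filled = [mean if x is None else x for x in incomes]

# Any split point below the smallest real income isolates the missing rows:
threshold = min(observed) - 1.0
missing_side = [x <= threshold for x in sentinel_filled]
# -> [False, False, True, False, True]: exactly the originally-missing rows

# After mean imputation the missing rows look like a real income of 85k,
# so this kind of split cannot separate them from observed values:
mean_side = [x <= threshold for x in mean_filled]
# -> [False, False, False, False, False]
```

Whether the fitted tree actually places such a split is what the two comments are debating: the sentinel only guarantees that the split is available, not that it is chosen.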
















1 Answer


















One reason you may not want to use "insert impossible value" methods is that the resulting predictive model is only valid conditional on the distribution of missingness remaining unchanged. If, after building your tree model, it is decided that certain features should start being collected more often, the model built with the "impute impossible value" method can no longer be used without retraining.

In fact, the problem is compounded further if the rates of missingness change during the data collection process itself. Then, even immediately after building the model, it is already "out of date": the current rates of missingness differ from the rates that held when the data was collected.

To illustrate the issue, suppose a bank is building a database to help predict whether clients will default on a loan. Early in the data collection process, loan officers have the option to conduct a background investigation, but they almost never do so for clients they deem trustworthy. Thus, for especially trustworthy customers, the background-check variable is almost always missing, and under the "impute impossible value" method, having a real (non-sentinel) value for the background check indicates high risk.

If background-check rates never change, this method will likely still yield valid predictions. But suppose the bank realizes that background checks are genuinely helpful for assessing risk, so it changes its policy to run them for everyone. Then everyone has a real value for the background check, and under the "impute impossible value" method everyone is flagged as high risk.

Cross-validation will not catch this issue, because the missingness distribution is the same in the training and test sets. So even though the "impute impossible value" method may look good during cross-validation, it can lead to poor predictions upon deployment!

Note that you would essentially need to throw away all your data every time your data collection policy changes! Alternatively, if you can correctly impute the missing values and their uncertainty, you can keep using the data that was collected under the old policy.
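As a toy version of the bank story (all rates and scores here are invented for the sketch): a rule that effectively learned "a non-missing background score means high risk" looks accurate under the old policy, then flags everyone once checks become universal:

```python
import random

SENTINEL = -1.0  # "impossible value" standing in for a missing background score

def collect(n, check_rate_risky, check_rate_safe, seed=0):
    """Simulate loan records under a given background-check policy.

    Returns (score, is_risky) pairs; the score is SENTINEL when no
    background check was run.
    """
    rng = random.Random(seed)
    records = []
    for _ in range(n):
        risky = rng.random() < 0.3
        checked = rng.random() < (check_rate_risky if risky else check_rate_safe)
        score = (0.8 if risky else 0.2) if checked else SENTINEL
        records.append((score, risky))
    return records

def learned_rule(score):
    # What a tree fit on old-policy data effectively learns:
    # any real (non-sentinel) score => predict high risk.
    return score != SENTINEL

# Old policy: checks run almost only on clients already deemed risky.
old = collect(1000, check_rate_risky=0.95, check_rate_safe=0.05)
old_acc = sum(learned_rule(s) == r for s, r in old) / len(old)

# New policy: everyone gets checked.
new = collect(1000, check_rate_risky=1.0, check_rate_safe=1.0, seed=1)
new_flagged = sum(learned_rule(s) for s, _ in new) / len(new)
# old_acc is high, but new_flagged == 1.0: every client is now "high risk"
```

Cross-validating within the old-policy data would never reveal the problem, since every fold shares the old missingness distribution.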






edited 44 mins ago

























answered 1 hour ago









Cliff AB











  • That's a good point: imputation could be more robust to changes in the way data goes missing. I take your statement about throwing away past data as an exaggeration, though; including a time variable and retraining the model should be enough to make it usable again.
    – gsmafra
    29 mins ago










  • @gsmafra: In general, I don't think adding a time variable will fix the problem. For example, in a random forest the time variable will only be included in about 1/3 of the trees, so it won't even appear in the majority of the decision trees in your forest.
    – Cliff AB
    21 mins ago










  • To be clear, I don't think you should throw out your data... but I'd only advise "impossible value imputation" for variables you don't expect to be very predictive to begin with, or where you're fairly certain that the missingness distribution is stable.
    – Cliff AB
    20 mins ago
















