
#3289275 Most sea ice ever recorded

Posted by mav1234 on 18 March 2015 - 11:22 AM

Talk about your false equivalencies....


Okay kids.... given the choice, would we trust:

  a) what someone else says someone's research says, or

  B) what the researcher says their research says?


If we trust what a researcher says their research says... We find that more researchers think their papers state support for AGW than an abstract rating alone would suggest.  Gaaaasssp!


That's the ONLY thing we can draw from the abstract ratings, precisely because the authors were given no chance to weigh in with "Inconclusive."  It is entirely possible that more of them would have marked their papers as inconclusive than the abstract ratings suggested, but to think that is the case is an assumption, and a bad one.


It is good to see, though, that you have now admitted you would rather trust the researchers than a third party source.  With that, why don't you go and read other research on the subject?  Why not read the other "97%" studies?  Why not read the literature itself?

#3289270 Most sea ice ever recorded

Posted by mav1234 on 18 March 2015 - 11:20 AM

Going by what we have, did the authors have a means to rate their papers as "not enough data", "undecided", etc?


Going by what we have, did the authors deviate from the abstract ratings by the reporters?  Yes?  How often did this happen?  Well, just among the abstracts rated 4, they deviated more than half the time!


Going by what we have, why is it so crazy to extrapolate these differences to the larger data set and then draw conclusions from what the authors said about their own work?


I agree.  Why is this so difficult to comprehend?


If the authors had no means to rate it as "Inconclusive" or "No data," then we cannot assume they intended to.  This is really basic stuff here.


What is funny is that the authors deviated from the abstract rating in a direction more towards their paper supporting AGW than neutrality. Do you understand what this means?  This suggests the abstract rating is more conservative than a full reading of a paper would suggest.

#3289243 Most sea ice ever recorded

Posted by mav1234 on 18 March 2015 - 10:59 AM

Truly, I'm not interested in how Cook et al rated the abstracts.  I'm interested in how the authors rated their own work. 


The question asked is, "What are the chances that the purported 0.4% could happen?"  As the authors nixed the possibility of directly comparing distributions of ratings for testing, I decided to do an extrapolation.


To do this, I took the data set that you provided of the raw data.  I tallied the abstracts that the reporters rated as a 4 where there was also a response from an author rating their own work (1,339).  I then checked how many of these author ratings deviated from what the reporters stated (757).  From this we would conclude that at least 55% of these papers had a stance of some sort and were incorrectly rated as not having any.  This number is likely bigger, as it does not account for ratings the authors might have marked as "undecided," but we'll just roll with the conservative figure (knowing what we know).
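The arithmetic above is simple enough to sketch (a hedged illustration using only the two counts quoted in this post, not recomputed from the raw data file itself):

```python
# Counts quoted above, not recomputed from the raw data file:
rated_4_with_response = 1339  # abstracts rated 4 that also had an author self-rating
deviated = 757                # author self-rating differed from the abstract rating

deviation_rate = deviated / rated_4_with_response
print(f"deviation rate: {deviation_rate:.1%}")  # ~56.5%, i.e. "at least 55%"
```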


Once that was done, I took the collected dataset to extrapolate a new table with corrections for the mis-coding of data by the reporters.  That done, the null hypothesis, which assumed the same distribution of ratings as the reporters', projected a dataset like the one below:


Position       Respondent Count       %
Endorse                   20719    98.4
Uncertain                    87     0.4
Reject                      252     1.2


I then extrapolated a new table with the same total count, but now changing the results where the authors rated their own work:


Position       Respondent Count       %
Endorse                   10188    48.4
Uncertain                 10746    51.0
Reject                      124     0.6



Hrm.... what is the chance that these two data sets would have randomly fallen into these categories just by chance?  Would you like the number for that?
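For what it's worth, that question can be put to a chi-square test of independence on the two tables above.  Here's a sketch in plain Python (counts copied from the tables in this post; with a 2x3 table, df = 2, and a statistic this size, the p-value is effectively zero):

```python
# Chi-square test of independence on the two extrapolated tables above.
observed = [
    [20719, 87, 252],     # null-hypothesis table (reporter distribution)
    [10188, 10746, 124],  # table corrected with author self-ratings
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Standard chi-square statistic: sum of (observed - expected)^2 / expected
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (obs - expected) ** 2 / expected

print(f"chi-square statistic: {chi2:.0f}")  # enormous; p-value vanishingly small
```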



False equivalency.  No Position != Uncertain.  How are you having so much trouble with this?  It must really threaten your world view.  The authors did not rate their papers as uncertain.  You cannot conclude that "no position" means uncertain.  This is crazy.


The reason the authors of climate papers gave many papers a rating of 5+ when the abstract rating came back as a 4 is extremely simple: the authors had the entire paper, while the abstract raters had only the abstract.  This is not a mis-coding on the part of the abstract raters.  Rather, you are demonstrating both your lack of understanding of the methods and a desire to confirm your biases so strongly that you are deliberately misrepresenting what was written.  It is quite amusing.


We have no way to extrapolate how many papers were Uncertain in the author rating scheme, because authors were not asked whether their paper was inconclusive/uncertain.  Therefore, you are assuming that a No Position paper is an Uncertain paper, and this is absolutely false.  I'm not sure how this is hard.  If you don't like the fact that the author survey did not include Uncertain, just discount it and look at any of the other 97% figures in that article, or any of the other articles out there, or hell, do some research yourself by reading abstracts.


Any of these would demonstrate you are wrong, so I doubt you will do them.

#3289144 Most sea ice ever recorded

Posted by mav1234 on 18 March 2015 - 09:37 AM

Okay.... tell us Philly and mav.... why would a t-test be inappropriate?  I realize that the majority of people on this board likely won't understand, but.... wow me.


t-tests calculate the probability that the data either are drawn from some population with some true hypothesized mean (one sample) or the probability that the data in two different groups are from the same population (two-sample).


both require numeric variables, sometimes called measurement variables. (edit: two-sample t-tests also require some kind of categorical variable that has only 2 groups)  the ratings of papers - despite being numbers - are a categorical variable and must be analyzed as such.  an easy (though not always perfect) way to figure out whether something is a numeric variable is to replace the numbers with letters and see if it changes the interpretation.  in this case, if we had categories a, b, c, da/db, e, f, g, the interpretation would indeed be the same as 1, 2, 3, 4a/4b, 5, 6, 7.  on the other hand, if I go out and measure the height of 500 college students, it wouldn't make sense to change 5'11" to a, 5'10" to b, etc., because we would lose information about magnitude differences.  1/2/3/4a/4b/5/6/7 have no true magnitudes; they merely correspond to roughly ordered categories, with a meaningful break at 4a, which is closer to N/A than to a numeric value.  since these ratings then get further grouped into 3-4 categories, depending on which table and whether it is abstract-rated or self-rated, we further lose any semblance of a numeric variable.
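the letter-relabeling check above can be sketched with made-up ratings (these numbers are illustrative only, not the Cook et al data):

```python
from collections import Counter

# made-up ratings for illustration -- NOT the Cook et al data
ratings_numeric = [1, 2, 4, 4, 4, 5, 7, 2, 4, 1]
relabel = {1: "a", 2: "b", 3: "c", 4: "d", 5: "e", 6: "f", 7: "g"}
ratings_letters = [relabel[r] for r in ratings_numeric]

# the frequency table (what a chi-square style analysis uses) survives the
# relabeling unchanged:
print(Counter(ratings_numeric))  # 4 occurs 4 times
print(Counter(ratings_letters))  # 'd' occurs 4 times -- same information

# a mean (what a t-test uses) exists only for the numeric coding, and its
# value depends entirely on the arbitrary codes chosen:
print(sum(ratings_numeric) / len(ratings_numeric))  # 3.4 -- but 3.4 of what, exactly?
```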


if you were referring to my specific figure, you may have mistakenly thought there were two groups of numeric data, but they are categorical variables (No Opinion or Inconclusive) with counts of each, along with the year group they were from.


what you seemed to be proposing instead is something like a goodness-of-fit test, wherein we have some null expectation of what we will find in papers (apparently 99% as "Support AGW" - where did this come from?) along with the other categories, and then you heaped your own bias into the numbers by combining the 40% "No stance on the issue" with "inconclusive" for w/e reason.  no matter how much we wish they did, the authors of Cook et al did not poll authors of reviewed abstracts with the option for 4b.


if you still doubt their findings, take a look at some of the abstracts they rated.  I did.  I also looked at the full texts.  Suffice to say, it is easy to see why so many authors that had papers in the "no stance" abstract group said their paper took a stance - the abstracts are often lacking references to AGW, possibly due to space concerns, but the intro usually says something about it even if their specific paper wasn't about testing how much humans influenced the climate.


You could also check other papers on the subject; this paper is not alone. It is not perfect, but it has results that match the literature.

#3288396 Most sea ice ever recorded

Posted by mav1234 on 17 March 2015 - 04:13 PM

With that being the case, then why is this data being discarded?  These are not papers that were arbitrarily picked.  They were picked specifically for their focus on climate change.  The self-ratings weren't arbitrary.  If I were to choose a rating that would most closely resemble "unknown", "indeterminate", "need more data", "inconclusive", etc.... on this scale from 1 to 7, what would I choose?


I would choose.... shocker.... 4!


Shockingly enough, in 2011, 40% of these papers were self-rated as 4.  Whatever do you suppose that means?


I would suppose that means exactly what the survey says it means: that 40% of authors chose option 4, indicating that their paper took no position.


Why should we assume anything about these authors? Let's look at Table 3.


You could validate your thoughts if you simply looked at the abstract ratings in Table 3.  I know this will be hard for you, since it may challenge your confirmation bias, but just... try to peek at Table 3.  Give it a go.  I think you'll see what you're looking for.  If you are right, and the majority of authors actually tested and/or made statements relating to AGW, that would be reflected in their abstracts.  Go ahead and look.


It may even blow your mind - that these climate scientists saying the overwhelming majority of climate scientists agree AGW is real are actually telling the truth!


Want to check for yourself?  Go to http://www.skeptical...m/tcp.php?t=viz.  Check it out.  I've picked out a few random choices for you that were marked as no opinion: 1,2,3,4.  BTW, another fun game: Look at the abstracts picking no position, then click on the paper and look in the introduction.  Not surprisingly, quite a few of them begin with a line about AGW being a real problem in the intro itself.

#3288339 Most sea ice ever recorded

Posted by mav1234 on 17 March 2015 - 03:24 PM

@ teeray and mav


Take 2011 as the latest and greatest for evaluating the current pulse of self-rated papers.  ~60% were definitively self-rated as pro-AGW.  ~1% was self-rated as anti-AGW.  What was the other 40% with respect to self-rating?


From the text of the survey:


4 Neutral: paper doesn't address or mention issue of what's causing global warming

#3288309 Someone explain the: Iran/USA Agreement/Treaty issue.

Posted by mav1234 on 17 March 2015 - 03:01 PM

Would any of you "enlightened" folks, especially ones with small kids, be upset if your new neighbors moved in and had a couple of pit bulls roaming their yard?  I mean after all, they have a right to have one just like anyone else?  Or would you think that because pits kill and injure kids all around the world, that you would rather not have them right next to you?  Quit judging them just because they want what everyone else has.



I know, the whiner brigade is on its way with, "so you are comparing people to dogs?"


in this example do I already own like 500 pitbulls

#3288246 Most sea ice ever recorded

Posted by mav1234 on 17 March 2015 - 02:23 PM

If you were to run a t-test against that data set to calculate the probability that more than 99 percent of all scientists could conclusively take a stance on a position to produce consistent results (via hundreds of self-ratings) that come back with a rate of almost 40% inconclusive/no-opinion results - papers limited in scope for the purpose of getting this very information - what do you suppose that probability would be?


Here's a hint: it's WAY LESS than one half of one percent.  What you are purporting is lunacy on a yearly basis, much less over a span of two decades.


Your first paragraph is total nonsense.  These data are not appropriate for a t-test.


I don't know why you insist on saying "40% inconclusive/no opinion."  This is flat out NOT true.  There was no option of inconclusive in the author survey. You can't assume a paper was inconclusive because it expressed no opinion on the cause.  


I think you are confused about the search terms used to evaluate studies.  The search terms did not specify papers that addressed the cause of any climate change; many, many of the papers in the literature these days have moved past that point because of the consensus.  So now, they are testing other aspects of climate change and do not mention human actions in their abstracts (they are, after all, very limited in space).  Their studies themselves may still support AGW but the abstract doesn't mention it (hence the larger proportion of authors thinking their papers supported AGW than the abstracts alone might suggest).



40% = Statistically insignificant..... what else would you like to pawn off?


no, he is talking about the 0.3% of all papers that were labeled as "undecided" due to abstract language, per the abstract reviewers.  this is separate from the survey sent to the authors.  a very, very small number of papers in the time period examined were "undecided."

#3287899 Most sea ice ever recorded

Posted by mav1234 on 17 March 2015 - 11:46 AM

If you will direct your attention to Table 4 with its adjoining graphs in Figure 2 (yet again), you will find three lines labeled "Endorse AGW", "Reject AGW" and "No Position on AGW".  Each of these are tagged with the appropriate footnotes explaining what each mean.  "No Position on AGW" was tagged with footnote b.  Footnote b reads (VERBATIM):

      b: Undecided self-rated papers have an average rating equal to 4


The highest percentage of papers during a single year of self-rating was before 2000 when it was hovering around 80%.  It was a smaller sample size, but it is what it is.  Since that time, the sample size grew larger, and the percentage of outright AGW endorsement dropped to what has leveled out to 60% of all self-rated papers up to 2011.


During this same span, "No Position on AGW" bottomed out at 20% before 2000, then climbed to 40% and leveled out shortly after 2000, continuing into 2011.  I haven't even gotten into questioning the methodology of the report and survey.  I am simply taking the data as presented by the authors and letting it stand on its own merits or lack thereof, rather than accepting a spoonful of bullshit for an explanation.


As to why this is proving too difficult to wrap your head around, I can't understand.  But apparently it is.  There is no amount of explaining away that changes what has been collected.  It is because of how this is being explained away that the credulity of the "experts" driving policy that I am not inclined to accept a necessarily ethical nor scientific professionalism when collecting, analyzing, testing/vetting and then presenting their findings.


twylyght,  read the methods.


No really, read them.


Note that they used a different survey for the authors than the abstracts.  Look at that survey.  Note that there is no 4b.  Realize that the caption in the table must be a mistake, then, or decide (for whatever reason) they have doctored this data file to take out 4b and 4a distinctions.


Realize that, in all likelihood, the authors realized their mistake in assuming that no position was unconvinced (or vice versa) after sending out the survey, and thus incorporated this into their abstract criterion post hoc.  This is why it is a subset.  This is consistent with the updates listed to the manuscript.


Be sad the authors made a mistake, but recognize that only 4 out of 1000 Category 4 (aka no position) abstracts contained language consistent with uncertainty.  This suggests what?  Hmm... There is little uncertainty in the field.  Who would have thought.


Concerned about those 4 all being in 2011?  Let's check it out in the data file.  Go for it, it's right here: http://www.skeptical...0noposition.txt


Don't worry, I also made a figure for you:




Realize that one shouldn't go on one study alone for anything.  Look at the other studies published in the area.  Realize they are all consistent.  Hmm. 

#3287635 John Oliver on the NCAA

Posted by mav1234 on 17 March 2015 - 09:16 AM

I'm totally okay with NCAA athletes being paid if athlete scholarships for those athletes are dissolved, and they instead have to compete in the pool of merit scholarships. (actually this would be fuged up, but I'm feeling argumentative today :P)


It annoys me that academic institutions have become minor leagues for certain professions.

#3287607 Most sea ice ever recorded

Posted by mav1234 on 17 March 2015 - 08:47 AM

Yes. This was spelled out in the conclusion of this report. To get this figure, the data designated "undecided" (again, the wording of the authors, not myself) is discarded. The ONLY way that this number is possible is to assume that one can either be FOR or AGAINST AGW underpinnings. That isn't science. That is pedagogy (as Philly no doubt couldn't wait to use in a post).


If you care to read over what I wrote again, I've never deviated from this.  I've stated exactly this.  What I've mentioned is how this becomes something entirely different when examined in a more holistic view of the data (especially since that is precisely the way this number is propagated at every turn).


And what I've stated all along is that there are multiple different methodological approaches they used to ask whether a consensus exists, and each of them generates percentages above 95% - including Table 3, which accounts for this Unconvinced group.  If we do some very simple addition, we see that 1.0% unconvinced + 1.9% reject AGW + 97.1% support AGW = 100%.


There are other 97% figures throughout the paper that are calculated in different ways.  One only looks at the specific search term "global climate change," which could conceivably be a less biased term to publish under than "global warming," and indeed, that search had 97% of papers supporting AGW.


One can take issue with a specific method they used - e.g., the author survey not including 4b - but that is why they presented so many examples that support the 97% point.  This study isn't perfect, but in conjunction with the other literature on the topic, it is strong support for an overwhelming consensus in the publishing community.


97% is not made up.  It was shown in other studies, as well - which you would know if you read the paper.


You can feel free to doubt if the composition of the 97% are the only true experts to consider in this field, as many climate "skeptics" do, but to deny that the consensus exists in the publishing climate science community is idiocy.


edit: and btw, no study is ever perfect. this one has its flaws. that is why we take the body of literature - not just one paper - and draw conclusions from there.

#3287164 Most sea ice ever recorded

Posted by mav1234 on 16 March 2015 - 09:01 PM


I still believe there is bias in stating 30% makes up 100% opinion therefore it makes science sound like a democracy, sounds huddle circle jerking. I will not deny climate change but the percentage of the human factor playing a role I feel is overstated. I am no expert nor really have a strong opinion because this ranks low on my things of interest.


Can you explain what you mean RE: "30% makes up 100% opinion"?




Were papers analyzed that did not address climate change for this report? Of the respondents that answered the survey, were they not commenting on said papers?


Papers that did not study climate change at all were not included.  So no, the respondents were not being asked about papers that did not address climate change.


If you meant, were papers that did not state an opinion (or state a lack of certainty) towards AGW included in the authors that were emailed - they were.  This doesn't make up the 97% from Table 3 I mentioned, but it is worth delving into this group, too.


So, the objective abstract reviewers *read only the abstract.*  But the full text may have taken a stand, which is where the author responses matter.  ~2000 papers were "self rated" by authors, and in those "self ratings", Table 5 shows that authors tended to consider their papers more often to support AGW than an objective reading of just the abstract would suggest.  Furthermore, of authors that said their paper took a position, 97% said the paper supported AGW.  This is the second time this 97% comes up, and this number has been shown in previous studies, too.

#3287086 Most sea ice ever recorded

Posted by mav1234 on 16 March 2015 - 08:27 PM

Then what were the tables comparing?

At the root of this is a single question that has yet to be honestly answered. Given just the data in this report, do 97% of these scientists endorse a pro-AGW stance?


Tables 1 and 2 relate to methodology.


Table 3 gives the ratings that the objective reviewers gave to the abstracts of the papers, based on the rankings from the earlier Table 2, divided into 4 categories (endorsement, which included explicit or implicit endorsement of AGW; no AGW position at all; rejection; and uncertainty).  That means 3 of those categories were considered to have taken some position - endorse AGW, reject AGW, or uncertain.  Of the ~12k papers examined, the vast majority (66.4%) took no stance in the abstract on the causes of climate change.  Of the approx. 4k papers that expressed some kind of position (again, including uncertainty), ~97% endorsed AGW.
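A rough arithmetic sketch of those Table 3 percentages (the raw counts below are the commonly cited Cook et al. abstract tallies; treat them as my assumption and check Table 3 in the paper itself):

```python
# Commonly cited Cook et al. (2013) abstract counts -- an assumption here;
# verify against Table 3 in the paper itself.
endorse, no_position, reject, uncertain = 3896, 7930, 78, 40

total = endorse + no_position + reject + uncertain  # ~12k abstracts
took_position = endorse + reject + uncertain        # ~4k abstracts

print(f"no position: {no_position / total:.1%}")                          # ~66.4%
print(f"endorse, among those with a position: {endorse / took_position:.1%}")  # ~97.1%
```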


Of note about Table 3 is that the ~12k papers came from ~30k authors.  Of the papers that made a statement regarding the cause of AGW in the abstract, 98% of the authors were on papers that supported AGW.


Table 4 gives the "self ratings" by authors that responded to email surveys about their papers. 


Table 5 compares the objective abstract ratings to the self-ratings.  Of note here is that authors who responded were more likely to say their papers explicitly supported AGW than the objective reading of an abstract would suggest.  This is not at all surprising.  Space in an abstract is limited, so we often won't put information generally considered consensus in the scientific community into the abstract if the study is not directly testing it, though it might appear in the intro itself.


The findings of this paper are consistent with previous findings of ~97% of publishing climate scientists ascribing to humans being responsible for all or a large part of AGW. 

#3286799 Most sea ice ever recorded

Posted by mav1234 on 16 March 2015 - 04:26 PM

Look at the self-ratings for 2011, how they were quantified, and how this compares to what you just put down.  Do they match?  Why do they not match?  Why are they not even close when put against the data from Table 5?



Table 3 isn't self ratings, so no, it wouldn't match Table 5.  


This is simple; the study design was twofold.  First, a group of "objective" reviewers assessed papers on a scale of 1 to 7.  Then, the authors of those papers were emailed and asked to self-assess.


Table 3 presents the objective reviewer outcomes.


Table 5 wouldn't necessarily match it, as often our feelings on what we write may not be identical to what an objective individual gathers from our abstracts - in part because sometimes what we think is in an abstract is in the main text, such as might be the case with AGW support.


edit: clarity...

#3286782 Most sea ice ever recorded

Posted by mav1234 on 16 March 2015 - 04:16 PM

According to the bolded text that you quoted yourself, this assumes that there is not room for scientists that are still amassing the data.  Ergo, that data was discarded to produce that result.


Unless you are asserting that "Undecided" or "I don't know" is not a valid opinion worth counting, then this conclusion is not representative of those respondents. 


Oh no, "undecided" or "I don't know" are certainly valid opinions worth counting, which is why they were.


The set of abstracts taking a position accounts for the 0.3% of abstracts with stated uncertainty.


It is absolutely ridiculous to assume that no statement is a statement of uncertainty.  That is not the case.


edit: removed R word after reading back a few pages to ecu's posts. :P