I understand there's some sort of sporting contest going on, so I'll leave the door open for folks to talk about whatever for the evening.
Have at it, but keep it civil.
Sunday, February 5, 2017
Late last night the Trump regime appealed the federal court order halting its Muslim immigration ban to the 9th Circuit, demanding an immediate reinstatement based on the grounds of national security. Early this morning the 9th Circuit replied with "nope."
The decision from the US Court of Appeals for the Ninth Circuit means the bans will remain on hold through the weekend, with a decision on a further request to reinstate the ban put off until at least Monday afternoon.
Justice Department lawyers had gone to the appeals court earlier Saturday, asking for an immediate stay of a lower court’s order that stopped enforcement of major portions of Trump’s Jan. 27 executive order nationwide.
The request came a little more than 24 hours after the Friday night order from US District Judge James Robart halting enforcement of much of President Trump’s executive order, which temporarily barred visa-holders from seven majority-Muslim countries from entering the US, suspended the US refugee program temporarily, and halted the entrance of Syrian refugees indefinitely.
“Appellants’ request for an immediate administrative stay pending full consideration of the emergency motion for a stay pending appeal is denied,” the late-night order from the appeals court stated.
In addition to the immediate stay request, the Justice Department’s lawyers also asked for a stay pending the department’s appeal of Robart’s order. That request is still under consideration.
Lawyers from Washington and Minnesota, who had brought the challenge in Robart’s court in Seattle, have until 11:59 p.m. Sunday Pacific Time to file a response to the Justice Department’s request. The Justice Department then has until 3:00 p.m. Pacific Time Monday to file a reply brief.
So far the judiciary is passing the test of whether we have three branches of government or just one. How long that will last is anyone's guess. We'll see how Week Three of the Trump regime proceeds.
In a world where science moves quickly and science news moves even faster, a Dutch team of researchers is looking for a way to reform science itself by busting junk data that the world has taken for granted as fact. Meet the Statcheck Project.
One morning last summer, a German psychologist named Mathias Kauff woke up to find that he had been reprimanded by a robot. In an email, a computer program named Statcheck informed him that a 2013 paper he had published on multiculturalism and prejudice appeared to contain a number of incorrect calculations – which the program had catalogued and then posted on the internet for anyone to see. The problems turned out to be minor – just a few rounding errors – but the experience left Kauff feeling rattled. “At first I was a bit frightened,” he said. “I felt a bit exposed.”
Kauff wasn’t alone. Statcheck had read some 50,000 published psychology papers and checked the maths behind every statistical result it encountered. In the space of 24 hours, virtually every academic active in the field in the past two decades had received an email from the program, informing them that their work had been reviewed. Nothing like this had ever been seen before: a massive, open, retroactive evaluation of scientific literature, conducted entirely by computer.
Statcheck’s method was relatively simple, more like the mathematical equivalent of a spellchecker than a thoughtful review, but some scientists saw it as a new form of scrutiny and suspicion, portending a future in which the objective authority of peer review would be undermined by unaccountable and uncredentialed critics.
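The "spellchecker" comparison can be made concrete with a toy sketch. The real Statcheck parses t, F, chi-square, and correlation tests out of published papers and recomputes each reported p-value; the version below is a heavily simplified illustration that handles only two-sided z-tests, and the regex format and function names are my own assumptions, not the actual tool's code.

```python
import re
from statistics import NormalDist

# Toy illustration of the Statcheck idea: find reported test results
# in text, recompute the p-value from the test statistic, and flag
# any mismatch. Handles only "z = ..., p = ..." for simplicity.
PATTERN = re.compile(r"z\s*=\s*(-?\d+\.?\d*),\s*p\s*=\s*(\.\d+)")

def check(text, tolerance=0.005):
    """Recompute each reported p-value and flag inconsistencies."""
    findings = []
    for match in PATTERN.finditer(text):
        z = float(match.group(1))
        p_reported = float(match.group(2))
        # two-sided p-value from the standard normal distribution
        p_computed = 2 * (1 - NormalDist().cdf(abs(z)))
        consistent = abs(p_computed - p_reported) <= tolerance
        findings.append((match.group(0), round(p_computed, 3), consistent))
    return findings

sample = ("The effect was significant (z = 2.20, p = .028), "
          "but age was not (z = 1.10, p = .050).")
for reported, recomputed, ok in check(sample):
    print(f"{reported!r}: recomputed p = {recomputed}, "
          f"{'OK' if ok else 'MISMATCH'}")
```

Run on the sample sentence, the first result checks out while the second does not (a z of 1.10 implies a two-sided p near .27, not .05), which is exactly the kind of spellchecker-level inconsistency the program surfaces without judging whether the error was innocent.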
Susan Fiske, the former head of the Association for Psychological Science, wrote an op-ed accusing “self-appointed data police” of pioneering a new “form of harassment”. The German Psychological Society issued a statement condemning the unauthorised use of Statcheck. The intensity of the reaction suggested that many were afraid the program was attributing not just mere statistical errors but some impropriety to the scientists.
The man behind all this controversy was a 25-year-old Dutch scientist named Chris Hartgerink, based at Tilburg University’s Meta-Research Center, which studies bias and error in science. Statcheck was the brainchild of Hartgerink’s colleague Michèle Nuijten, who had used the program to conduct a 2015 study that demonstrated that about half of all papers in psychology journals contained a statistical error. Nuijten’s study was written up in Nature as a valuable contribution to the growing literature acknowledging bias and error in science – but she had not published an inventory of the specific errors it had detected, or the authors who had committed them. The real flashpoint came months later, when Hartgerink modified Statcheck with some code of his own devising, which catalogued the individual errors and posted them online – sparking uproar across the scientific community.
Hartgerink is one of only a handful of researchers in the world who work full-time on the problem of scientific fraud – and he is perfectly happy to upset his peers. “The scientific system as we know it is pretty screwed up,” he told me last autumn. Sitting in the offices of the Meta-Research Center, which look out on to Tilburg’s grey, mid-century campus, he added: “I’ve known for years that I want to help improve it.” Hartgerink approaches his work with a professorial seriousness – his office is bare, except for a pile of statistics textbooks and an equation-filled whiteboard – and he is appealingly earnest about his aims. His conversations tend to rapidly ascend to great heights, as if they were balloons released from his hands – the simplest things soon become grand questions of ethics, or privacy, or the future of science.
“Statcheck is a good example of what is now possible,” he said. The top priority, for Hartgerink, is something much more grave than correcting simple statistical miscalculations. He is now proposing to deploy a similar program that will uncover fake or manipulated results – which he believes are far more prevalent than most scientists would like to admit.
Considering the kind of people who are making policy decisions based on bad science and misleading data, automating data checking with powerful computers seems like a necessary step toward restoring rigor in science.
But it's only a small part of the issue. The rest is political and financial, and until that changes, I fear Statcheck and its successors will also be manipulated by politicians and corporations.
Still, you have to start somewhere. More power to Hartgerink.