Bogosity Podcast for 26 September 2021

Show video

Welcome to the Bogosity Podcast for the week of 26 September 2021, the podcast that invented the multiversal socket wrench. This is your host, Shane Killian. Let's rasterize the News of the Bogus. We've talked about predictive policing before and how it's only as good as the assumptions put into it, like when the TSA came up with predictive methods that would have pegged any given autistic person as a terrorist threat. So when you have cops who have been repeatedly shown to be racist, it may SOUND like a good idea to develop algorithms that make those decisions for them, but when those algorithms are based on the PREVIOUS bad decisions of cops, it doesn't help. In fact, it potentially hurts, because now police have a high-tech gizmo to point to in order to gain verisimilitude.

That's the conclusion of a report from the National Association of Criminal Defense Lawyers, perfectly titled "Garbage In, Gospel Out." Defense lawyers know better than anyone how a shiny gadget waved in front of a judge and jury can sway a case far more than any genuine method of getting at the truth. The report found, quote: "The discussion surrounding big data policing programs often assumes that the police are the consumers, or the end users, of big data, when they themselves are generating much of the information upon which big data programs rely from the start. Prior to being fed into a predictive policing algorithm, crime data must first be observed, noticed, acted upon, collected, categorized, and recorded by the police. Therefore, every action – or refusal to act – on the part of a police officer, and every similar decision made by a police department, is also a decision about how and whether to generate data." In other words, the algorithm cannot be any better than police have been in the past.

And if police have been racist in the past, then racism gets baked into the algo. Quote: "If crime data is to be understood as a by-product of police activity, then any predictive algorithms trained on this data would be predicting future policing, not future crime...this functions as a self-fulfilling prophecy. Neighborhoods that have been disproportionately targeted by law enforcement in the past will be overrepresented in a crime dataset, and officers will become increasingly likely to patrol these same areas in order to observe new criminal acts that confirm their prior beliefs regarding the distributions of criminal activity. As the algorithm becomes increasingly confident that these locations are most likely to experience further criminal activity, the volume of arrests in these areas will continue to rise, fueling a never-ending cycle of distorted enforcement."

And the more it happens, the worse it gets, in a perpetual feedback loop. Police focus on black neighborhoods; those neighborhoods come to be considered "high crime" and are elevated by the algorithms. Police respond to the algorithm and patrol those areas more heavily. That puts even more data into the algorithm, which weights those areas even more heavily. On and on and on it goes. Which means that, in the future, even if you get rid of racist police, you won't get rid of racist POLICING, because the decisions of the old racist cops are still influencing the algorithm, and therefore the decisions of the future cops. Quote: "The biases held by police officers and those reporting crimes, and correlations between attributes like race and arrest rates, will not only be recognized and replicated by the algorithm, but directly integrated into the software in a way that is subtle, unintentional, and difficult to correct, because it is often not the result of an active choice by the programmer." In fact, as we've covered before, it can be next to impossible to go into the algorithm to see exactly what it's doing.
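
To see why that loop never corrects itself, here's a minimal sketch in Python--a toy model for illustration only, not anything taken from the NACDL report. Two neighborhoods have identical true crime rates, but patrols are allocated in proportion to past recorded arrests, and new arrests can only be recorded where officers are actually sent. The neighborhood that starts out over-policed stays over-policed, because the biased starting data never washes out:

import random

# Toy model -- an illustration of the feedback loop, NOT the report's methodology:
# two neighborhoods with the SAME true crime rate, where neighborhood A starts
# with more recorded arrests because it was patrolled more heavily in the past.
TRUE_CRIME_RATE = 0.05            # identical in both neighborhoods
recorded = {"A": 200, "B": 100}   # historical arrest counts: A was over-policed
TOTAL_PATROLS = 1000              # patrols the department allocates each year

random.seed(1)
for year in range(1, 11):
    total = sum(recorded.values())
    new_arrests = {}
    for hood, count in recorded.items():
        # The "predictive" step: patrols go where past arrests were recorded.
        patrols = round(TOTAL_PATROLS * count / total)
        # Arrests can only be recorded where officers are present to observe them,
        # so the new data reflects patrol intensity, not crime alone.
        new_arrests[hood] = sum(random.random() < TRUE_CRIME_RATE
                                for _ in range(patrols))
    for hood, n in new_arrests.items():
        recorded[hood] += n
    share_a = recorded["A"] / sum(recorded.values())
    print(f"Year {year}: share of arrest records pointing at A = {share_a:.1%}")

And that's the gentle version: with a greedier allocation rule, such as sending the extra patrols only to the top-ranked hotspot, the gap doesn't just persist, it widens--the "never-ending cycle of distorted enforcement" the report describes.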

And it has severe consequences for constitutional rights. Quote: "Data-driven policing raises serious questions for a Fourth Amendment analysis. Prior to initiating an investigative stop, law enforcement typically must have either reasonable suspicion or probable cause...The question then becomes: to what extent should an algorithm be allowed to support a finding of probable cause or reasonable suspicion? Does a person loitering on a corner in an identified 'hotspot' translate to reasonable suspicion? What if an algorithm identified that person as a gang member or someone likely to be involved in drug dealing or gun violence? Can an algorithm alone ever satisfy the reasonable suspicion requirement?...The mere fact of conducting stops can cause algorithms to double down on a particular area as a hotspot, and then interpret the data about the stop as a further indication of a person’s dangerousness, resulting in more policing of the same neighborhoods and increased police encounters for the same population. The lack of transparency and clarity on the role that predictive algorithms play in supporting reasonable suspicion determinations could make it nearly impossible to identify a potential Fourth Amendment violation."

As for the Fifth Amendment, quote: "the use of data-driven policing technologies is often not publicly disclosed. Even if the use of the technology is a matter of public record, the inputs used, training data and algorithms, are proprietary and therefore shielded from scrutiny. This raises a number of due process issues that implicate a person’s right to a fair trial...as of now, no one currently on the gang database list has a right to oppose it, has a right to review it, has a right to even have an open hearing as to how they are able to now remove their names from that database."

The Sixth Amendment, quote: "Algorithmic tools often use claims of proprietary software and trade secrets to shield their technology from outside scrutiny. The companies that develop the tools conduct their own validation studies, rather than rely on independent verification and validation, which is the accepted practice. Allowing companies with a financial interest in the success of their tools to validate their own technologies with no outside scrutiny is scientifically suspect. It also frustrates any defense effort to challenge the reliability of the science underlying the novel software."

The Fourteenth: "Because any bias is filtered through an algorithm, critics have accused data-driven tools of 'techwashing' the biases inherent in the data. While machine learning is not advanced enough to formulate intent, in the policing context, the unthinking use of algorithmic instruments will reinforce historical race-based patterns of policing. In order to address allegations of systemic bias in these algorithmic tools, attorneys will need to litigate the intent standard out of the Fourteenth Amendment analysis and insist on a disparate outcomes test when technology is involved."

And the First: "When people are criminalized based on their associations and their participation on social media, they are subject to the 'surveillance tax.' The intrusiveness of surveillance extends beyond arrest: Knowledge of surveillance alone can inhibit our ability to engage in free expression, movement, and unconventional behavior." And although the report doesn't mention it, it would have an indirect effect on the Second as well.

Someone who lives in a high-crime neighborhood has the greatest need for a firearm, but they're also more likely to be targeted for bogus police action--which means they're more likely to have arrests placed on their record, which means they're more likely to fail background checks when they go to purchase a firearm. Don't expect data-driven policing to look any different from regular policing. The only difference is a shiny thing police can wave in front of jurors while denying defense attorneys the ability to properly scrutinize the decisions of police officers.

It really makes one wonder how extreme the demands the content providers make over even a single alleged copyright violation are going to get--and now you can up your estimate to THOUSANDS of British pounds. It started when British ISP Virgin Media was ordered by the High Court to hand over subscriber data to Voltage Holdings LLC. Despite the warnings of advocates, everyone was assured that the abuse of this data was nothing but the ravings of Internet conspiracy kooks, and that the big content companies would never, ever, ever use it to engage in threats and extortion.

Except it took less than a week for some of the ISP's subscribers to receive letters accusing them of pirating the movie Ava, although why anyone would want to is unexplained. The recipients were warned that if this goes to court for copyright infringement, there'd be all sorts of mean, nasty, ugly things resulting from it, and Voltage would really rather not see that happen, and aren't they so nice and kind and generous and considerate that way. So Voltage will agree to drop the whole thing if the subscriber just admits to everything and pays a settlement fee. No amount is given, but early reports say Voltage has demanded several thousand pounds from subscribers. The main problem is, once again, the assumption that the person who pays for the service is the infringer, when it could be someone else in the house or a guest on the WiFi or whatever, meaning the subscriber isn't liable for it at all! Something that Voltage actually acknowledges! Also, if the actual infringer is someone under the age of 18, then no legal action can be initiated against them. The same is true if the infringer is over the age of 65, as well as anyone who is "vulnerable," which I assume means things like disabled people.

All of which means that this is an extortion racket that, quite often, will be levied against people who are either innocent or cannot be sued! So much of this is the same old, same old, but other aspects are troubling. The fact that they're asking for so much money is an indication that they have something else up their sleeves, and as unscrupulous as these big media companies have proven to be in the past, one shudders to contemplate what that might be. Not to mention the concern that British citizens can now, by court order, have their data pilfered and turned over to corporations before they've even been accused of anything at all.

One of the big problems with advocates of solar and wind is that they never talk about the environmental damage those technologies can do. It would be one thing if they were making the case, as nuclear advocates do, that the environmental damage is meager compared to the power they generate.

But environmentalists keep pretending that solar and wind are just so perfect and natural and have absolutely no environmental costs whatsoever. Not so, and there's an additional piece of evidence from this past February showing the waste from wind turbines. Theoretically, about 85% of the material used to make the turbine blades is recyclable. It can be used as feedstock for cement, ground into fiberglass granulate, and put to many other uses. However, none of that means anything if it's not actually happening. And it isn't. Which is troubling, since global demand is rising--completely artificially, as a result of misguided environmental policies passed by governments around the world. But the blades only have a 20-year operational life, and more and more of them are being junked and buried in landfills--tens of thousands of them around the world.

And when you consider that many of them are longer than the wing of a 747, that means a LOT of landfill space! Made worse by the fact that not many landfills can or will accept them. Not only that, but they have to be trucked to the landfill, which burns fossil fuels and releases carbon--and you can only fit one of them on a big rig. And they DON'T biodegrade. Although theoretically they can be recycled, the economics just isn't there: it can be far more costly to extract composites from discarded turbine blades than to produce them from scratch. It even requires sawing through them with an industrial diamond saw JUST to cut them down to a size where they can be transported. In fact, it's difficult to see what benefits will come from it at all, aside from making more money for Berkshire Hathaway.

And now it's time to hypersensitize this week's Biggest Bogon Emitter. News media again. If you remember, about a week before the 2020 election we debunked a lot of the bogosity around the Hunter Biden laptop scandal, which the news media refused to treat as the scandal it clearly was, in order to preserve Joe Biden's chances of being elected. Everything I said at the time has just been confirmed: everything said by the news media to shrug off the story was a knowing and deliberate lie, as the CIA, big tech, the Democratic Party, and the news media worked to censor and suppress the information about Biden's activities in Ukraine and China.

That included denigrating the report from the New York Post, which posted obviously genuine photos of Hunter, proving it was his laptop, along with emails that were verified not only by their digital signatures but by the other parties to the emails, including Tony Bobulinski, who said several times that they were genuine and that they referenced Joe Biden's activities in Ukraine and China. The Bidens didn't even TRY to deny that the emails were genuine. But none of that stopped Obama's CIA director John Brennan and director of national intelligence James Clapper from organizing their spooks into a campaign of outright fabrications, including the claim that the laptop was "Russian disinformation." That meant BOTH that the information originated from Russia AND that it was faked. And they KNEW that both claims were false. And that campaign prompted social media companies like Twitter and Facebook to censor all coverage of the story.

Twitter even falsely claimed it was the result of hacking. It was this issue, if you recall, that got Glenn Greenwald kicked out of The Intercept, a news organization he co-founded specifically so that stories like this wouldn't be suppressed. Now Politico reporter Ben Schreckinger has published a book called "The Bidens: Inside the First Family’s Fifty-Year Rise to Power." Schreckinger spent months investigating the New York Post documents and found definitive proof that those emails and related documents are indisputably authentic--ironic, given that Politico was the first to publish the CIA's lies. Here we have a widespread political campaign of outright deceit to mislead the public about vital information regarding a prominent candidate in the days before an election, which ended up being one of the closest in history.

Hark! Do you hear ANY apologies, retractions, corrections, or acknowledgement of wrongdoing from any of these operatives in the news media? I think not. THIS is how elections get stolen, people. Not by hacking voting machines, or changing ballots, or stopping people from voting because they don't have an ID, or dumping tons of ballots into a landfill. It happens because the electorate is DELIBERATELY misinformed about the candidates they're asked to choose between.

So all of that makes the news media this week's Biggest Bogon Emitter. And now let's dehumidify this week's Idiot Extraordinaire. And this week, it goes to schools. Pretty much that: schools across the board.

And not even for the usual reasons, like how they miseducate children or whatever. No, it turns out they leak TONS of information about children onto the Dark Web. NBC News did an analysis of files on hackers' sites and found they're chock full of information on children obtained from schools.

So far in 2021, ransomware gangs have published information stolen from more than 1,200 K-12 schools in America. That includes personal data like medical conditions or the financial status of the family. It also includes permanently identifying information like Social Security numbers and birthdays. Many of those schools had no idea the information had been leaked, and parents have found themselves without much recourse. As if we needed any more reasons to hate both government schooling and government identification.

Schools are becoming more and more of a target for hackers who trade in people's data. But a lot of times, the information isn't even hacked; it's gleaned from data the schools publish on their websites and social media. Children are especially vulnerable this way, since they haven't taken any steps that would alert a bank or any other institution that something hinky is going on. Hackers can easily get credit cards and other assets using a child's identity and run up massive debt before they're even grown. Schools are supposed to protect our children. And this is one more way in which it's plain to see that they absolutely suck at their job.

So all of that makes schools this week's Idiot Extraordinaire. Well, that wraps up this "Hey, kid! You just won a new Ford Tippex!" edition of the Bogosity Podcast. I hope you enjoyed it; if you did, please go to donate.bogosity.tv for several ways to support the show, and discord.bogosity.tv to join the discussion. Subscribe at Patreon or SubscribeStar and you can listen early and ad-free.

Thank you for listening. Until next time, here's a quote from Buckminster Fuller: "What usually happens in the educational process is that the faculties are dulled, overloaded, stuffed and paralyzed so that by the time most people are mature they have lost their innate capabilities." The Bogosity Podcast is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license.
