Dennis Prager Thinks The Nazis Had Some Good Points, Actually

Dennis Prager doesn’t think the Nazi slogan of the “three Ks” is bad. The German slogan, “Kinder, Küche, Kirche,” holds that a woman’s place should be limited to the children, the kitchen, and the church. The Majority Report crew discusses why they are not surprised that Prager’s view aligns with that of fascists, and how Prager and those on the Right like him do not want women involved in politics, echoing the hierarchy the Nazis wanted to maintain.

Mitt Romney Is Inventing Policies for a Fantasy G.O.P.

Last year, as an alternative to the temporary expansion of the child tax credit under President Biden’s American Rescue Plan, Senator Mitt Romney of Utah introduced a plan to give every family a monthly benefit of up to $350 per child for children 5 and under and $250 per child for children 6 to 17. It was simple, generous (it included a payment before birth, too) and — on paper, at least — effective. According to the Niskanen Center, which helped devise the proposal, the Romney plan would cut overall child poverty by roughly a third and the deepest child poverty by half.

Republicans hated it. His Senate colleagues Marco Rubio and Mike Lee denounced Romney’s plan as “welfare assistance,” and called for “pro-work” policies to assist families. “An essential part of being pro-family is being pro-work,” the senators said. “Congress should expand the child tax credit without undercutting the responsibility of parents to work to provide for their families.”

Romney, who voted against Biden’s rescue package, went back to the drawing board and recently unveiled a less generous version of his plan aimed at winning Republican support in the Senate. In this iteration, which would fill the gap left by the expiration of the Biden expansion in December, a family with children would have to earn at least $10,000 per year to qualify for the full credit. Below that, the benefit would scale proportionally so that a family earning $5,000 per year would receive 50 percent of the credit. The most impoverished families would receive the smallest benefits.

This version of the child benefit, to use the lingo of Romney’s earlier conservative critics, would “reward work.”

And yet there’s little indication that any more than a token group of Republican lawmakers is interested in Romney’s latest proposal. There’s no appetite for it. For the vast majority of Republicans in Congress, passing a new child benefit is not the kind of work they came to Washington to do. (It should be said, though, that in the absence of the filibuster, that token group of Republicans plus most Democrats would be enough to pass the Romney bill or something like it.)

The hostile and then indifferent response to Romney’s child allowance from his Republican colleagues — as well as the nearly total absence of meaningfully pro-family legislation from conservative lawmakers — tells us something very important about the future of the pro-life cause in the Republican Party. But maybe not quite what you think.

In the weeks since the Supreme Court overturned Roe v. Wade, some conservatives and abortion opponents have, as Elaine Godfrey reported in The Atlantic, expressed the hope that their movement and political party would turn their attention to the material well-being of mothers, families and children. So far, that hope seems to be misplaced.

Free, now, to pursue whatever policies they’d like on abortion, most Republican lawmakers and anti-abortion activists appear to be focused on passing harsh new restrictions on reproductive autonomy and creating broad protections for “fetal life.”

Trigger laws and prior statutes have already made abortion illegal in roughly a dozen states. Legislators in Missouri and Texas want to pass laws that would extend their bans across state lines, to punish residents who go to other states to obtain abortions. South Carolina Republicans, likewise, have drafted legislation that would ban all abortions except to prevent the death of the mother and would prosecute anyone “conspiring to cause, or aiding or abetting, illegal abortion.” And an Ohio bill would recognize the “personhood” and constitutional rights of “all unborn human individuals from the moment of conception.”

What you won’t find passing anytime soon in any Republican-led state legislature are bills to reduce the cost of childbearing and child-rearing. At most, a few states that have or will ban abortion have extended postpartum care under Medicaid. But there are no major plans to improve coverage or provide new benefits. As a practical matter, the pro-welfare, anti-abortion politician does not exist, at least not in the Republican Party.

The policy correlation is, in fact, what you would expect it to be. As a rule, the states with the most generous safety nets and anti-poverty programs are also the states with the widest access to abortion and other reproductive health services. The states with the most restrictive abortion laws are also, as a rule, the states that do the least for families and children as a matter of public policy.

Another way to make this connection is simply to look at a map of states that continue to refuse to expand Medicaid under the Affordable Care Act and compare it with a map of states that outlaw (or effectively outlaw) abortion. The overlap fits the pattern.

This distance — between the rhetoric of “life” and the reality of conservative Republican governance — only looks like hypocrisy. In truth, it is perfectly consistent.

That’s because the Republican ideal of a “pro-family” agenda is girded on traditional hierarchies. Reproductive autonomy, up to and including the right to get an abortion, weakens hierarchies of gender. And the social safety net — especially one that extends directly to mothers and children — undermines the preferred conservative social order of isolated, atomized households kept in line through market discipline.

If the goal of abortion opponents and politicians is to encourage life and promote families, then, yes, their interests and priorities are at odds with their actions. But if the goal is a more rigid and hierarchical world of untrammeled patriarchal authority, then, well, things are pretty much going according to plan.

The 10 tactics of fascism | Jason Stanley | Big Think

Fascism is a cult of the leader, who promises national restoration in the face of supposed humiliation by immigrants, leftists, liberals, minorities, homosexuals, women, in the face of what the fascist leader says is a takeover of the country’s media, cultural institutions, schools by these forces.

Fascist movements typically, though not invariably, rest on an urban/rural divide. The cities are where there’s decadence, where the elites congregate, where there’s immigrants, and where there’s criminality.

Each of these elements alone is not in and of itself fascist, but you have to worry when they’re all grouped together, seeing the other as less than. Those moments are the times when societies need to worry about fascism.

Read the video transcript: https://bigthink.com/videos/what-is-f…

 

Loyalty to the dominant group means law-abidingness. And the minority group is by its nature not law-abiding. Law and order in fascist politics means the members of a minority group who accept their subservient role, they’re law-abiding, and the members of the dominant group by their very nature are law-abiding. By definition, the leader can’t violate law and order. So law and order doesn’t mean justice. Law and order doesn’t mean equality. Law and order structures who’s legitimate and who’s not.

Everywhere around the world, no matter what the situation is, in very different socioeconomic conditions, the fascist leader comes and tells you, “Your women and children are under threat. You need a strong man to protect your families.” They make conservatives hysterically afraid of transgender rights or homosexuality, other ways of living. These are not people trying to live their own lives. They’re trying to destroy your life, and they’re coming after your children. What the fascist politician does is they take conservatives who aren’t fascist at all, and they say, “Look, I know you might not like my ways. You might think I’m a womanizer. You might think I’m violent in my rhetoric. But you need someone like me now. You need someone like me ’cause homosexuality, it isn’t just trying for equality. It’s coming after your family.”

Fascist movements typically, though not invariably, rest on an urban/rural divide. The cities are where there’s decadence, where the elites congregate, where there’s immigrants, there’s criminality, there’s Sodom and Gomorrah. In the city, there’s not real work. The pure, hard-working, real members of the nation live in the rural areas, where they work hard with their hands. When our politicians talk about inner-city voters or urban voters, we all know what they mean.

Arbeit macht frei, “Work shall make you free.” This was written on the gates of Auschwitz. The idea is that the minority group, they’re lazy, and they need to be made to work. Free labor. The minority group and the leftists, they’re lazy by their nature, and it gives them a work ethic. Labor unions are run by communists who are trying to make things easier. Hard work is a virtue. In liberal democracy, we don’t value people by how hard they work. What would happen to disabled people who can’t work? They would then have no value. It’s why the Nazis had the T4 program to murder the disabled, because the disabled were lebensunwertes Leben, life unworthy of life, because to be valued was to be capable of hard work.

Each of these individual elements is not in and of itself fascist, but you have to worry when they’re all grouped together, when honest conservatives are lured into fascism by people who tell them, “Look, it’s an existential fight. I know you don’t accept everything we do. You don’t accept every doctrine. But your family is under threat. Your family is at risk. So without us, you’re in peril.” Those moments are the times when we need to worry about fascism.

Why does a psychopath win every time against a narcissist?

A narcissist deals with shame by offloading it on to other people, a psychopath deals with it by repeatedly bashing your head on the pavement.

Psychopaths have deeply internalised shame which is dealt with by violence. It isn’t that psychopaths do not register shame, but that it is only a fleeting emotion, quickly replaced by rage.

Psychopathy-related personality traits and shame management strategies in adolescents – PubMed
The purpose of this study was to examine whether there is a correlation between the amount of psychopathy-related personality traits and the type of shame management in adolescents. Two hypotheses were examined; first, that there is a positive correlation between psychopathy-related personality trai …

Psychopaths are not the cool calm robots of Quora fairy tales, they are creatures of hair trigger rage. Attempting to shame a psychopath is incredibly foolhardy, and rather than harsh words, or feigned indifference, you may be met by very real violence.

Narcissists like to appear to be at the top of the pecking order by affecting the superficialities of power; psychopaths work out who is there by raw, real-world might, physical or otherwise.

Psychopaths are hierarchical, like reptiles or apes – they will fight for their position, and inciting shame in another is a challenge for their spot, or a refutation of the status they appear to be claiming.

A narcissist in a group of psychopaths who attempted to shame or devalue them in any way would soon receive a physical challenge to their assumed superiority. With psychopaths it’s put up or shut up. Got to pay the cost to be the boss.

As Mac Davidson says, there are no narcissists in prison.

If you ever find yourself in the unfortunate situation of being surrounded by psychopaths, be very civil and polite, and do your best not to act haughty or superior, as they are very sensitive to shame. Psychopaths do not care about being thought a good person; they only care about where they sit in the pecking order. Chances are you’re at the bottom, so act accordingly. Self-deprecation is your go-to here.

Beat them with brains,

Robert

Google hired Timnit Gebru to be an outspoken critic of unethical AI. Then she was fired for it.

Gebru is one of the most high-profile Black women in her field and a powerful voice in the new field of ethical AI, which seeks to identify issues around bias, fairness, and responsibility.

Two months ago, Google promoted Timnit Gebru, co-lead of a group focused on ethical artificial intelligence, after she earned a high score on her annual employee appraisal. Gebru is one of the most high-profile Black women in her field and a powerful voice in the new field of ethical AI, which seeks to identify issues around bias, fairness, and responsibility.

In his peer review of Gebru, Jeff Dean, the head of Google Artificial Intelligence, left only one comment when asked what she could do to have a greater impact, according to documents viewed by The Washington Post: Ensure that her team helps make a promising new software tool for processing human language “consistent with our AI Principles.”

In an email thanking Dean for his review, Gebru let him know that her team was already working on a paper about the ethical risks around the same language models, which are essential to understanding the complexity of language in search queries. On Oct. 20, Dean wrote that he wanted to see a draft, adding, “definitely not my area of expertise, but would definitely learn from reading it.”

Six weeks later, Google fired Gebru while she was on vacation.

“I can’t imagine anybody else who would be safer than me,” Gebru, 37, said. “I was super visible. I’m well known in the research community, but also the regulatory space. I have a lot of grass-roots support — and this is what happened.”

In an internal memo that he later posted online explaining Gebru’s departure, Dean told employees that the paper “didn’t meet our bar for publication” and “ignored too much relevant research” on recent positive improvements to the technology. Gebru’s superiors had insisted that she and the other Google co-authors either retract the paper or remove their names. Employees in Google Research, the department that houses the ethical AI team, say authors who make claims about the benefits of large language models have not received the same scrutiny during the approval process as those who highlight the shortcomings.

Her abrupt firing shows that Google is pushing back on the kind of scrutiny that it claims to welcome, according to interviews with Gebru, current Google employees, and emails and documents viewed by The Post.

It raises doubts about Silicon Valley’s ability to self-police, especially when it comes to advanced technology that is largely unregulated and being deployed in the real world despite demonstrable bias toward marginalized groups. Already, AI systems shape decision-making in law enforcement, employment opportunity and access to health care worldwide.

That made Gebru’s perspective essential in a field that is predominantly White, Asian and male. Women made up only 15 percent of the AI research staff at Facebook and 10 percent at Google, according to a 2018 report in Wired magazine. At Google, Black women make up 1.6 percent of the workforce.

Although Google publicly celebrated Gebru’s work identifying problems with AI, it disenfranchised the work internally by keeping it hierarchically distinct from other AI initiatives, not heeding the group’s advice, and not creating an incentive structure to put in practice the ethical findings, Gebru and other employees said.

Google declined to comment, but noted that in addition to the dozen or so staff members on Gebru’s team, 200 employees are focused on responsible AI.

Google has said that it did not fire Gebru, but accepted her “resignation,” citing her request to explain who at Google demanded that the paper be retracted, according to Dean’s memo. The company also blamed an email Gebru wrote to an employee resource group for women and allies at Google working in AI as inappropriate for a manager. The message warned the group that pushing for diversity was no use until Google leadership took accountability.

Rumman Chowdhury, a former global lead for responsible AI at Accenture and chief executive of Parity, a start-up that helps companies figure out how to audit algorithms, said there is a fundamental lack of respect within the industry for work on AI ethics compared with equivalent roles in other industries, such as model risk managers in quantitative hedge funds or threat analysts in cybersecurity.

“It’s being framed as the AI optimists and the people really building the stuff [versus] the rest of us negative Nellies, raining on their parade,” Chowdhury said. “You can’t help but notice, it’s like the boys will make the toys and then the girls will have to clean up.”

Google, which for decades evangelized an office culture that embraced employee dissent, has fired outspoken workers in recent years and shut down forums for exchange and questioning.

Nearly 3,000 Google employees and more than 4,000 academics, engineers and industry colleagues have signed a petition calling Gebru’s termination an act of retaliation by Google. Last week, nine Democratic lawmakers, including Sens. Elizabeth Warren (Mass.) and Cory Booker (N.J.) and Rep. Yvette D. Clarke (N.Y.), sponsor of the Algorithmic Accountability Act, a bill that would require companies to audit and correct race and gender bias in their algorithms, sent a letter to Google chief executive Sundar Pichai asking the company to affirm its commitment to research freedom and diversity.

Like any good researcher, Gebru is comfortable in the gray areas. And she has been using her ouster as an opportunity to shed light on the black box of algorithmic accountability inside Google — annotating the company’s claims with contradictory data, drawing connections to larger systemic issues, and illuminating the way internal AI ethics efforts can break down without oversight or a change in incentives to corporate practices and power structures.

Big Tech dominates AI research around advancements in machine learning, image recognition, language translation — poaching talent from top universities, sponsoring conferences and publishing influential papers. In response to concerns about the way those technologies could be abused or compound bias, the industry ramped up funding and promotion of AI ethics initiatives, beginning around 2016.

Tech giants have made similar investments in shaping policy debate around antitrust and online privacy, as a way to ward off lawmakers. Pichai invoked Google’s AI principles in an interview in 2018 with The Post, arguing for self-regulation around AI.

Google created its Ethical AI group in 2018 as an outgrowth of an employee-led push to prioritize fairness in the company’s machine learning applications. Margaret Mitchell, Gebru’s co-lead, pitched the idea of a team of researchers investigating the long-term effects of AI and translating those findings into action to mitigate harm and risk.

The same year, Pichai released a broadly worded set of principles governing Google’s AI work after thousands of employees protested the company’s contract with the Pentagon to analyze surveillance imagery from drones. But Google, which requires privacy and security tests before any product launch, has not mandated an equivalent process for vetting AI ethics, employees say.

Gebru, whose family’s ethnic origins are in Eritrea, was born and raised in Ethiopia and came to Massachusetts as a 16-year-old after receiving political asylum during the war between the two African countries. She began her career as an electrical engineer at Apple and received her PhD from the Stanford Artificial Intelligence Lab, studying computer vision under renowned computer scientist Fei-Fei Li, a former Google executive and now co-director of Stanford’s Human-Centered AI Institute, which receives funding from Google.

Gebru did her postdoctoral research at Microsoft Research as part of a group focused on accountability and ethics in AI. There, she and Joy Buolamwini, then a master’s student at MIT Media Lab, co-wrote a groundbreaking 2018 study that found that commercial facial recognition tools sold by companies such as IBM and Microsoft were 99 percent accurate at identifying White males, but only 35 percent effective with Black women.

In June, IBM, Microsoft and Amazon announced that they would stop selling the software to law enforcement, which Dean credited to Gebru’s work. She also co-founded Black in AI, a nonprofit organization that increased the number of Black attendees at the largest annual AI conference.

Gebru said that in 2018, Google recruited her with the promise of total academic freedom. She was unconvinced, but the company was opening its first artificial intelligence lab on the African continent in Accra, the capital of Ghana, and she wanted to be involved. When she joined Google, Gebru said, she was the first Black female researcher in the company. (When she left, there were still only a handful of Black women working in research, out of hundreds.)

Gebru said she was also drawn to working with Mitchell. Both women prioritized foresight and building practical solutions to prevent AI risk, whereas the operating mind-set in tech is biased toward benefits and “rapid hindsight,” in response to harm, Mitchell said.

Gebru’s approach to ethical AI was shaped by her experiences. Hardware, for instance, came with datasheets that documented whether components were safe to use in certain situations. “When you look at this field as a whole, that doesn’t exist,” said Gebru, an electrical engineer. “It’s just super behind in terms of documentation and standards of safety.”

She also leaned on her industry experience when collaborating with other teams. Engineers live on a product cycle, consumed with putting out fires and fixing bugs. A vague requirement to “make things fair” would only cause more work and frustration, she thought. So she tried to build institutional structures and documentation tools for “when people want to do the right thing.”

Despite their expertise, the Ethical AI group fought to be taken seriously and included in Google’s other AI efforts, employees said.

Within the company, Gebru and her former colleagues said, there is little transparency or accountability regarding how decisions around AI ethics or diversity initiatives get made. Work on AI principles, for instance, falls under Kent Walker, the senior vice president of global affairs, whose vast purview includes lobbying, public policy and legal work. Walker also runs an internal ethics board of top executives, including Dean, called the Advanced Technology Review Council, which is responsible for yes or no decisions when issues escalate, Gebru said. The Ethical AI team had to fight to be consulted on Walker’s initiatives, she said.

“Here’s the guy tasked with covering Google’s a–, lobbying and also … working on AI principles,” Gebru said. “Shouldn’t you have a different entity that pushes back a little bit internally — some sort of push and pull?” What’s more, members of Walker’s council are predominantly vice presidents or higher, constricting diversity, Gebru said.

In her conversations with product teams, such as a group working on fairness in Machine Learning Infrastructure, Gebru said she kept getting questions about what tools and features they could build to protect against the ethical risks involved with large language models. Google had credited these models with the biggest breakthrough in improving search results in the past five years. The models can process words in relation to the other words that come before and after them, which is useful for understanding the intent behind conversational search queries.

But despite the increasing use of these models, there was limited research investigating groups that might be negatively impacted. Gebru says she wanted to help develop those safeguards, one of the reasons she agreed to collaborate with the research paper proposed by Emily M. Bender, a linguist at the University of Washington.

Mitchell, who developed the idea of model cards, like nutrition labels for machine learning models, described the paper as “due diligence.” Her model card idea is being adopted more widely across the industry, and engineers needed to know how to fill out the section on harm.

Gebru said her biggest contribution to both her team and the paper has been identifying researchers who study directly affected communities.

That diversity was reflected in the authors of the paper, including Mark Diaz, a Black and Latino Google researcher whose previous work looked at how platforms leave out the elderly, who talk about ageism in blog posts, but don’t share as much on sites such as Twitter. For the paper, he identified the possibility that large data sets from the Internet, particularly if they are from a single moment in time, will not reflect cultural shifts from social movements, such as the #MeToo movement or Black Lives Matter, which seek to shift power through changes in language.

The paper identified four overarching categories of harm, according to a recent draft viewed by The Post. It delved into the environmental effect of the computing power, the inscrutability of massive data sets used to train these models, the opportunity costs of the “hype” around claims that these models can understand language, as opposed to identifying patterns, and the danger that the real-sounding text generated by such models could be used to spread misinformation.

Because Google depends on large language models, Gebru and Mitchell expected that the company might push back against certain sections or attempt to water down their findings. So they looped in PR & Policy representatives in mid-September, with plenty of time before the deadline for changes at the end of January 2021.

Before making a pre-publication draft available online, Gebru first wanted to vet the paper with a variety of experts, including those who have built large language models. She asked for feedback from two top people at OpenAI, an AI research lab co-founded by Elon Musk, in addition to her manager at Google, and about 30 others. They suggested additions or revisions, Gebru said. “I really wanted to send it to people who would disagree with our view and be defensive,” Gebru said.

Given all their upfront effort, Gebru was baffled when she received a notification for a meeting with Google Research Vice President Megan Kacholia at 4:30 p.m. on the Thursday before Thanksgiving.

At the meeting, Kacholia informed Gebru and her co-authors that Google wanted the paper retracted.

On Thanksgiving, a week after the meeting, Gebru composed a six-page email to Kacholia and Dean outlining how disrespectful and oddly secretive the process had been.

“Specific individuals should not be empowered to unilaterally shut down work in such a disrespectful manner,” she wrote, adding that researchers from underrepresented groups were mistreated.

Mitchell, who is White, said she shared Gebru’s concerns but did not receive the same treatment from the company. “Google is very hierarchical, and it’s been a battle to have any sort of recognition,” she said. “We tried to explain that Timnit, and to a lesser extent me, are respected voices publicly, but we could not communicate upwards.”

“How can you still ask why there aren’t Black women in this industry?” Gebru said.

Gebru said she found out this week that the paper was accepted to the Conference on Fairness, Accountability and Transparency, as part of its anonymous review process. “It’s sad, the scientific community respects us a lot more than anybody inside Google,” she said.