POL Machiavellian Scheme to make Biden look good in the debates.

Troke

TB Fanatic
1. Because of the Chinese Plague, there can be no audience.

2. Thus the two candidates could just as well do it separated by whatever.

3. The two candidates agree on what the questions will be.

4. Using prompters and modern tech, Biden makes a tape of his answer to each question in which he appears sharp, intelligent, focused, etc. And they keep doing it until he gets it right. Which will make quite a contrast to the standard Trumpian all-over-the-landscape answer.

5. The tapes are played for the appropriate question.

I think with modern hi-tech, they could pull it off.
 

223shootersc

Veteran Member
The media will do anything to make Bedum look sane and even if he isn't in the debate they will line up "experts" afterwards to say how he beat our great president Trump like a drum.
 

Tripod

Senior Member
Doesn't matter. Give Joe the questions and the answers and he will still screw it up and look like a buffoon. There is no easy way out for him except to not show up.
Mike
 

summerthyme

Administrator
_______________
Have you seen some of his recent scripted ads? You have to assume they redid them multiple times... and he still screwed up. I don't think there are enough hours left before the election to get him to appear "sharp, intelligent and focused".

Now, green screen and a hologram, maybe...

Summerthyme
 

AlaskaSue

North to the Future
I'm afraid Troke has nailed it. This is what the MSM wants to feed to the people. But Summerthyme has a good point on the green screen... ;)
 

Practical

Senior Member
You will never see those two on a debate stage together. If you did, it's over. Remember tho, the networks, regardless of what they say about Biden, are not talking to you, they are talking to their acolytes. 'Be calm, all is well, look how good Biden was...'
 

adgal

Veteran Member
This is all about Deepfakes -
Fair use cited:


What Are Deepfakes and How Are They Created?
Deepfake technologies: What they are, what they do, and how they’re made
By Sally Adee
A growing unease has settled around evolving deepfake technologies that make it possible to create evidence of scenes that never happened. Celebrities have found themselves the unwitting stars of pornography, and politicians have turned up in videos appearing to speak words they never really said.
Concerns about deepfakes have led to a proliferation of countermeasures. New laws aim to stop people from making and distributing them. Earlier this year, social media platforms including Facebook and Twitter banned deepfakes from their networks. And computer vision and graphics conferences teem with presentations describing methods to defend against them.
So what exactly is a deepfake, and why are people so worried about them?
What is a deepfake?
Deepfake technology can seamlessly stitch anyone in the world into a video or photo they never actually participated in. Such capabilities have existed for decades—that’s how the late actor Paul Walker was resurrected for Fast & Furious 7. But it used to take entire studios full of experts a year to create these effects. Now, deepfake technologies—new automatic computer-graphics or machine-learning systems—can synthesize images and videos much more quickly.
There’s a lot of confusion around the term “deepfake,” though, and computer vision and graphics researchers are united in their hatred of the word. It has become a catchall to describe everything from state-of-the-art videos generated by AI to any image that seems potentially fraudulent.
A lot of what’s being called a deepfake simply isn’t: For example, a controversial “crickets” video of the U.S. Democratic primary debate released by the campaign of former presidential candidate Michael Bloomberg was made with standard video editing skills. Deepfakes played no role.
How deepfakes are created
The main ingredient in deepfakes is machine learning, which has made it possible to produce deepfakes much faster at a lower cost. To make a deepfake video of someone, a creator would first train a neural network on many hours of real video footage of the person to give it a realistic “understanding” of what he or she looks like from many angles and under different lighting. Then they’d combine the trained network with computer-graphics techniques to superimpose a copy of the person onto a different actor.
While the addition of AI makes the process faster than it ever would have been before, it still takes time for this process to yield a believable composite that places a person into an entirely fictional situation. The creator must also manually tweak many of the trained program’s parameters to avoid telltale blips and artifacts in the image. The process is hardly straightforward.
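The training-then-superimpose pipeline described above can be sketched in miniature. The toy NumPy example below is purely illustrative and is not a real deepfake system: it uses the classic DIY face-swap idea of one shared encoder with a separate decoder per identity, with tiny linear layers standing in for the deep convolutional networks an actual tool would train, and random vectors standing in for cropped face frames. All sizes, names, and learning rates here are made up for the sketch.

```python
import numpy as np

# Toy shared-encoder / two-decoder face-swap sketch (illustrative only).
# Real systems train deep convolutional nets on hours of footage; here
# a 64-dim vector stands in for a face image and linear layers stand in
# for the networks.

rng = np.random.default_rng(0)
DIM, LATENT = 64, 16  # made-up "face" size and bottleneck size

def init(n_in, n_out):
    return rng.normal(0.0, 0.1, (n_in, n_out))

W_enc = init(DIM, LATENT)             # encoder shared by both identities
W_dec = {"A": init(LATENT, DIM),      # decoder trained only on person A
         "B": init(LATENT, DIM)}      # decoder trained only on person B

def forward(x, who):
    z = np.tanh(x @ W_enc)            # encode into the shared latent space
    return z @ W_dec[who]             # decode as the chosen identity

def train_step(x, who, lr=0.05):
    """One reconstruction-training step (plain gradient descent)."""
    global W_enc
    n = len(x)
    z = np.tanh(x @ W_enc)
    out = z @ W_dec[who]
    err = out - x                     # reconstruction error
    grad_dec = z.T @ err / n          # backprop through decoder layer
    dz = (err @ W_dec[who].T) * (1.0 - z**2)
    grad_enc = x.T @ dz / n           # backprop through encoder layer
    W_dec[who] -= lr * grad_dec
    W_enc -= lr * grad_enc
    return float((err**2).mean())

faces_A = rng.normal(size=(32, DIM))  # stand-ins for person A's frames
faces_B = rng.normal(size=(32, DIM))  # stand-ins for person B's frames

losses = []
for _ in range(300):                  # alternate identities each epoch
    la = train_step(faces_A, "A")
    lb = train_step(faces_B, "B")
    losses.append((la + lb) / 2)

# The "swap": encode a face of A, decode it with B's decoder.
swapped = forward(faces_A[:1], "B")
print(swapped.shape)
```

Feeding A's face through B's decoder is the swap itself; a real pipeline would follow this with the manual parameter tweaking and graphics-side compositing the article describes to hide blips and artifacts.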
Many people assume that a class of deep-learning algorithms called generative adversarial networks (GANs) will be the main engine of deepfakes development in the future. GAN-generated faces are near-impossible to tell from real faces. The first audit of the deepfake landscape devoted an entire section to GANs, suggesting they will make it possible for anyone to create sophisticated deepfakes.
However, the spotlight on this particular technique has been misleading, says Siwei Lyu of SUNY Buffalo. “Most deepfake videos these days are generated by algorithms in which GANs don’t play a very prominent role,” he says.
GANs are hard to work with and require a huge amount of training data. It takes the models longer to generate the images than it would with other techniques. And—most important—GAN models are good for synthesizing images, but not for making videos. They have a hard time preserving temporal consistency, or keeping the same image aligned from one frame to the next.
The best-known audio “deepfakes” also don’t use GANs. When Canadian AI company Dessa (now owned by Square) used the talk show host Joe Rogan’s voice to utter sentences he never said, GANs were not involved. In fact, the lion’s share of today’s deepfakes are made using a constellation of AI and non-AI algorithms.

Who created deepfakes?

The most impressive deepfake examples tend to come out of university labs and the startups they seed: a widely reported video showing soccer star David Beckham speaking fluently in nine languages, only one of which he actually speaks, was made with a version of code developed at the Technical University of Munich, in Germany.

And MIT researchers have released an uncanny video of former U.S. President Richard Nixon delivering the alternate speech he had prepared for the nation had Apollo 11 failed.

But these are not the deepfakes that have governments and academics so worried. Deepfakes don’t have to be lab-grade or high-tech to have a destructive effect on the social fabric, as illustrated by nonconsensual pornographic deepfakes and other problematic forms.
Indeed, deepfakes get their very name from the ur-example of the genre, which was created in 2017 by a Reddit user calling himself r/deepfakes, who used Google’s open-source deep-learning library to swap porn performers’ faces for those of actresses. The code inside DIY deepfakes found in the wild today is mostly descended from this original code, and while some might be considered entertaining thought experiments, none can be called convincing.
So why is everyone so worried? “Technology always improves. That’s just how it works,” says Hany Farid, a digital forensics expert at the University of California, Berkeley. There’s no consensus in the research community about when DIY techniques will become refined enough to pose a true threat—predictions vary wildly, from 2 to 10 years. But eventually, experts concur, anyone will be able to pull up an app on their smartphone and produce realistic deepfakes of anyone else.
What are deepfakes used for?
The clearest threat that deepfakes pose right now is to women—nonconsensual pornography accounts for 96 percent of deepfakes currently deployed on the Internet. Most target celebrities, but there are an increasing number of reports of deepfakes being used to create fake revenge porn, says Henry Ajder, who is head of research at the detection firm Deeptrace, in Amsterdam.
But women won’t be the sole targets of bullying. Deepfakes may well enable bullying more generally, whether in schools or workplaces, as anyone can place people into ridiculous, dangerous, or compromising scenarios.
Corporations worry about the role deepfakes could play in supercharging scams. There have been unconfirmed reports of deepfake audio being used in CEO scams to swindle employees into sending money to fraudsters. Extortion could become a major use case. Identity fraud was the top worry regarding deepfakes for more than three-quarters of respondents to a cybersecurity industry poll by the biometric firm iProov. Respondents’ chief concerns were that deepfakes would be used to make fraudulent online payments and hack into personal banking services.
For governments, the bigger fear is that deepfakes pose a danger to democracy. If you can make a female celebrity appear in a porn video, you can do the same to a politician running for reelection. In 2018, a video surfaced of João Doria, the governor of São Paulo, Brazil, who is married, participating in an orgy. He insisted it was a deepfake. There have been other examples. In 2018, the president of Gabon, Ali Bongo, who was long presumed unwell, surfaced on a suspicious video to reassure the population, sparking an attempted coup.
The ambiguity around these unconfirmed cases points to the biggest danger of deepfakes, whatever their current capabilities: the liar’s dividend, which is a fancy way of saying that the very existence of deepfakes provides cover for anyone to do anything they want, because they can dismiss any evidence of wrongdoing as a deepfake. It’s one-size-fits-all plausible deniability. “That is something you are absolutely starting to see: that liar’s dividend being used as a way to get out of trouble,” says Farid.
How do we stop malicious deepfakes?

Several U.S. laws regarding deepfakes have taken effect over the past year. States are introducing bills to criminalize deepfake pornography and prohibit the use of deepfakes in the context of an election. Texas, Virginia, and California have criminalized deepfake porn, and in December, the president signed the first federal law as part of the National Defense Authorization Act. But these new laws only help when a perpetrator lives in one of those jurisdictions.
Outside the United States, however, the only countries taking specific action to prohibit deepfake deception are China and South Korea. In the United Kingdom, the Law Commission is currently reviewing existing revenge-porn laws with an eye to addressing the different ways deepfakes can be created. The European Union, however, doesn’t appear to see this as an imminent issue compared with other kinds of online misinformation.
So while the United States is leading the pack, there’s little evidence that the laws being put forward are enforceable or have the correct emphasis.
And while many research labs have developed novel ways to identify and detect manipulated videos—incorporating watermarks or a blockchain, for example—it’s hard to make deepfake detectors that are not immediately gamed in order to create more convincing deepfakes.
Still, tech companies are trying. Facebook recruited researchers from Berkeley, Oxford, and other institutions to build a deepfake detector and help it enforce its new ban. Twitter also made big changes to its policies, going one step further and reportedly planning ways to tag any deepfakes that are not removed outright. And YouTube reiterated in February that it will not allow deepfake videos related to the U.S. election, voting procedures, or the 2020 U.S. census.
But what about deepfakes outside these walled gardens? Two programs, called Reality Defender and Deeptrace, aim to keep deepfakes out of your life. Deeptrace works on an API that will act like a hybrid antivirus/spam filter, prescreening incoming media and diverting obvious manipulations to a quarantine zone, much like how Gmail automatically diverts spam before it reaches your inbox. Reality Defender, a platform under construction by the company AI Foundation, similarly hopes to tag and bag manipulated images and video before they can do any damage. “We think it’s really unfair to put the responsibility of authenticating media on the individual,” says Ajder.
 

Cacheman

Veteran Member
This is probably why they keep delaying his Veep pick....

[Screenshot: Dan Bongino (@dbongino) Twitter post]

I just told my wife yesterday, after hearing and seeing Biden on news clips, that he's declining as fast as what I saw in my Grandmother. In my Grandmother the effects accelerated rapidly when her Alzheimer's moved from mid stage to end stage, and I see the same in Biden. Staying in the basement and away from social interaction is probably making it worse.

Can they hold him together until after the convention? If they kick him out now, they get Bernie-they really don’t want him.

Biden was always a placeholder anyways. That's why his campaign is so focused on the VP sweepstakes. It's like a hidden primary by super delegates to pick the actual candidate. Deep state has the plan and Biden just a pawn piece in the game.

Joe will never debate Trump since he'll be out after the convention and when the Bernie Bros get screwed this time it's very possible all hell breaks loose like we haven't seen yet in the cities.

*EDIT- Has anyone reminded the Dems that if he isn’t alive or physically capable of being sworn in, their VP doesn’t get it. The next highest Presidential candidate wins.
 

nadhob

Senior Member
You will never see those two on a debate stage together. If you did, it's over. Remember tho, the networks, regardless of what they say about Biden, are not talking to you, they are talking to their acolytes. 'Be calm, all is well, look how good Biden was...'
To go live as in the past, they, the media, will want an incentive.... Maybe we should suggest it be a pay-per-view event?? I'm in.... It would be worth the 30 bucks!!!
 

Hawkgirl_70

Veteran Member
This is probably why they keep delaying his Veep pick....

View attachment 212176

I just told my wife yesterday, after hearing and seeing Biden on news clips, that he's declining as fast as what I saw in my Grandmother. In my Grandmother the effects accelerated rapidly when her Alzheimer's moved from mid stage to end stage, and I see the same in Biden. Staying in the basement and away from social interaction is probably making it worse.

Can they hold him together until after the convention? If they kick him out now, they get Bernie-they really don’t want him.

Biden was always a placeholder anyways. That's why his campaign is so focused on the VP sweepstakes. It's like a hidden primary by super delegates to pick the actual candidate. Deep state has the plan and Biden just a pawn piece in the game.

Joe will never debate Trump since he'll be out after the convention and when the Bernie Bros get screwed this time it's very possible all hell breaks loose like we haven't seen yet in the cities.

*EDIT- Has anyone reminded the Dems that if he isn’t alive or physically capable of being sworn in, their VP doesn’t get it. The next highest Presidential candidate wins.
Are you sure about that???
I’ve never heard this before.
 

Wildweasel

F-4 Phantoms Phorever
I just hope that Trump's campaign is ready for Biden to be trying to cheat at the debate and has plans to counter such efforts. In my mind I can see Biden wearing an earpiece, told to repeat whatever it tells him, happily answering a question on foreign policy by reciting "The Itsy, Bitsy Spider" as it comes over his earpiece from Trump's jamming system.
 

Squib

Veteran Member
Biden would wear a mask, be in a separate studio, surrounded by his staff, and the answers would sound like Biden’s voice, but in truth, would be a voice actor digitally made to sound like Biden.

Anything goes wrong? Oops! Technical difficulties!

With a masked face, it’d be easy to pull off.
 