Project Veritas and Pfizer - A misunderstanding on both sides


What it is about: a secret audio recording by Project Veritas: Pfizer allegedly explores "mutating" the COVID-19 virus for new vaccines through "directed evolution"


The layman neither knows nor understands the genome sequencing process and therefore cannot see through the unproven claims.


The specialist in virology and bioinformatics blindly trusts his complex algorithms, which significantly shape the genome construction process. The underlying assumptions, and any questioning of them, remain untouched within the field.


The truth is: every sequencing run produces differences within the sequence, and these are sold to the general public as mutations. In reality, laboratories do not work with "viruses": they mix genetic material from different sources, poison and/or kill laboratory animals injected with this toxic cocktail, and then analyse the resulting mixture with the help of computers so that they can present the generated model as a new creation. Virologists are not able to sequence exactly the same "virus" twice!


To put it bluntly: every new genome sequencing of that mixed genetic material from different sources yields an alleged mutation. An identical genome sequence, even from the same sample, is impossible.


Not only can the genome construction process be manipulated through various parameter settings; the whole procedure amounts to a remarkable self-deception.


INFO: In our first issue of the NL MAGAZINE we described the genome construction process and its weaknesses in detail.


Do you have technical questions? Then contact us:

- NL contact: @WissenNeuGedacht



- Project Veritas:


  • What is alignment in virology



Construction of a genome

The genome sequencing process involves several problems.
A "genome" cannot be obtained from a single sequence read, hence the different approaches:
- Sanger 
- shotgun 
- (NGS etc.) 
- Nanopore
Basically, they all have one thing in common: the sequence reads are short. Sanger produces somewhat larger chunks of around 1,000 bases, while NGS usually delivers reads 75-150 bases long.
Nanopore: this is the next hype. Here the sample preparation is very complex, and I think new artefacts will emerge. For example, the error rate of Nanopore is claimed to be about 10%.
For one thing, there is no isolated structure, which makes it difficult to tell from which source the short sequence reads originate.
Assemblers use complex mathematical algorithms to search the many very short gene sequences that were read (the reads) for overlaps and join them into longer stretches (contigs).
Different assemblers, such as Trinity and Megahit, produce completely different contig results from exactly the same data (the previously read gene sequences), both in their number and in their length. This already happens with very small genomes, such as the one SARS-CoV-2 is said to possess.
Even within a single assembler, different parameter settings can lead to completely different results, which is telling, because it shows that these are merely constructs and not genomes read off exactly from reality.
Each time a sample is sequenced, differences in the sequence reads occur. This goes by different names: some call it mutation (errors during copying), others assume faulty reading processes. Further possibilities arise from the design of the experiment itself.
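The run-to-run variation described here can be sketched with a toy read simulator. Everything in it is invented for illustration (the sequence, the read length, the error rate); real sequencing error processes are far more complex than a simple per-base flip:

```python
import random

def sequence_with_errors(true_seq, read_len, error_rate, seed):
    """Simulate one sequencing run: cut overlapping reads from the sample
    and flip individual bases at the given error rate."""
    rng = random.Random(seed)
    reads = []
    for i in range(len(true_seq) - read_len + 1):
        read = list(true_seq[i:i + read_len])
        for k in range(read_len):
            if rng.random() < error_rate:
                # replace the base with a different one
                read[k] = rng.choice([b for b in "ACGT" if b != read[k]])
        reads.append("".join(read))
    return reads
```

With a non-zero error rate, two runs over the same material will in general not deliver identical read sets, which is the point made above.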
It is a fact, however, that the idea many lay people have, namely that a genome is sequenced in one piece and that its beginning and end are known, is a false one.
Therefore, it is important to understand how a genome is constructed: it is a kind of puzzle.
De novo assembly means working without a template: the assemblers are run to construct one long sequence out of the many short reads. Here, too, it must be clear to everyone that exactly the same genome can never be assembled in every sequencing run. There are always differences in the base sequences.
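The de novo step can be sketched with a toy greedy overlap assembler. This is a deliberately simplified illustration, not how Trinity or Megahit actually work; the reads and the minimum-overlap parameter are invented. Note how changing `min_overlap` changes the contigs, which is the parameter dependence described above:

```python
def overlap(a, b, min_len):
    """Length of the longest suffix of a that matches a prefix of b
    (at least min_len long), or 0 if there is none."""
    start = 0
    while True:
        start = a.find(b[:min_len], start)
        if start == -1:
            return 0
        if b.startswith(a[start:]):
            return len(a) - start
        start += 1

def greedy_assemble(reads, min_overlap):
    """Repeatedly merge the pair of reads with the largest overlap
    until no overlap of at least min_overlap remains."""
    reads = list(reads)
    while True:
        best_len, best_i, best_j = 0, None, None
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    olen = overlap(a, b, min_overlap)
                    if olen > best_len:
                        best_len, best_i, best_j = olen, i, j
        if best_len == 0:
            return reads  # the remaining pieces are the contigs
        merged = reads[best_i] + reads[best_j][best_len:]
        reads = [r for k, r in enumerate(reads) if k not in (best_i, best_j)]
        reads.append(merged)
```

For example, the reads ATGCG, GCGTA and GTACC collapse into the single contig ATGCGTACC at min_overlap=3, but remain three separate pieces at min_overlap=4.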
If a template exists and is used, we speak of an alignment (an optional step), i.e. a template against which the read gene sequences can and should be aligned. The software aligns the short reads to the template, and in the process gaps are artificially filled, etc.
The alignment thus represents a further manipulation.
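The template-guided step can be sketched as follows. This is a minimal illustration with invented sequences, not a real aligner (real tools such as BWA or Bowtie are far more sophisticated); it only shows the mechanism criticised above: positions no read covers are filled in from the template itself.

```python
def reference_guided_consensus(reference, reads):
    """Place each read at its best-matching position on the reference,
    then fill every position no read covers with the reference base."""
    consensus = [None] * len(reference)
    for read in reads:
        # best position = fewest mismatches against the reference window
        positions = range(len(reference) - len(read) + 1)
        best_pos = min(positions,
                       key=lambda p: sum(x != y for x, y in
                                         zip(read, reference[p:p + len(read)])))
        for k, base in enumerate(read):
            consensus[best_pos + k] = base
    # uncovered positions are taken over from the template
    return "".join(c if c is not None else r
                   for c, r in zip(consensus, reference))
```

With reference ATGCGTACC and the single read ATGAG, the output is ATGAGTACC: the read contributes its five bases, and the remaining four characters come straight from the template rather than from any measurement.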
It is also a fact that, with enough sample material, in principle any arbitrary genome can be constructed with the techniques and the (PCR) primers that are used.
As for the claim of the double helix: this is merely a concept that has never been scientifically proven. To see this, it is enough to read through the work of the people responsible.
About DNA itself:
- according to scientific findings, the physical and molecular structure of DNA is fragile and sensitive and can easily be damaged by heat, chemicals and radiation.
(Look at what chemicals, radiation and heat are used to determine sequences and more).
- DNA is constantly changing; it is not a fixed blueprint, and even the concept of a gene (what is a gene?) has long since disintegrated.
- It is lacking at every turn: in the DNA itself, in important control experiments, and in the genome construction process.
It is difficult to understand why scientists, biologists and chemists believe that studying dead tissue treated with chemicals and applying mathematical models will lead to any discoveries. What makes them think that they are dealing with one new substance and not another?
What molecular biologists and biochemists call isolation is actually the identification and documentation of by-products that arise after chemicals and a certain type of heat are applied to biological material.
They compare the by-products formed with the by-products of the previously "isolated" substance, and if the identified and documented by-products do not match in quantity and composition with those already documented, they call it a new substance.
This applies not only to DNA, but also to various types of proteins, vitamins, RNA, etc.
All the best