

"In this webinar, Dr. Cowan discusses the following:
-What is disease?
-The International COVID Summit hosted by the European Parliament; what was discussed there
The link for this video can be found here: https://live.childrenshealthdefense.org/chd-tv/events/fluoride-report-or-systematic-review-of-the-science-or-may-4-or-12-30pm-et/fluoride-report-systematic-review-of-the-science-may-4/
- How did they come up with 6 million variants?


Source: Next Level, Knowledge Rethought

Assembly and Alignment:

Do not confuse the two distinct computational steps of assembly and alignment. The workflow is:
1. NGS sequencing (short sequence fragments, approx. 150 nt)
2. Assembly (merging reads via overlaps)
3. Optional alignment against a "template" (reference sequence)
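The assembly step described above can be illustrated with a minimal greedy-overlap sketch. This is a toy illustration of the overlap-merging idea, not the actual MEGAHIT algorithm, and the read data are invented:

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of read a that is a prefix of read b (>= min_len)."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(reads, min_len=3):
    """Repeatedly merge the pair of reads with the largest overlap into one contig."""
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    n = overlap(a, b, min_len)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:
            break  # no remaining overlaps: leave the pieces as separate contigs
        merged = reads[i] + reads[j][n:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads

# Invented toy reads (far shorter than the real ~150 nt) covering one sequence
reads = ["ATGGCTA", "GCTAGGC", "GGCTTCA"]
print(greedy_assemble(reads))  # -> ['ATGGCTAGGCTTCA']
```

Real assemblers use de Bruijn graphs and heuristics rather than this pairwise greedy search, which is one reason different runs and tools can produce different contigs.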
The Chinese team generated a total of 56,565,928 sequence reads of 150 nt each, i.e. 8,484,889,200 nucleotides.
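The nucleotide total follows directly from the read count and read length; a quick check:

```python
reads = 56_565_928   # number of sequence reads reported
read_length = 150    # nt per read
total_nt = reads * read_length
print(total_nt)      # -> 8484889200
```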
The assembly simply uses those sequences that are believed to be viral.
All other publications take the genome published by the Chinese team as given and use specific primers (see e.g. the RKI paper): an armada of primers, adapted to the published Chinese sequences, is used to find exactly the sequences they need and to continue working with them.
In effect, they deceive themselves: they find what they are specifically looking for, then use the many software programs merely to create overlaps from the selected sequences and optionally align them. So-called gap-filling programs are also used, which supply (i.e. invent) missing sequences in order to satisfy quality rules.
However, the result is never exactly the same: deviations inevitably arise during genome construction, because each sequencing run yields a different set of sequences. These deviations are then sold, in a thought trap, as mutants...
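The selection effect described above can be sketched as filtering reads by whether they contain a chosen primer sequence. This is a simplified in-silico analogy (real primers prime PCR amplification rather than filter data), and the primer and read strings are invented:

```python
def select_reads(reads, primers):
    """Keep only reads that contain at least one primer sequence.
    Illustrates the selection effect: you only recover what you
    specifically search for."""
    return [r for r in reads if any(p in r for p in primers)]

# Invented toy data: two reads match the primer, one does not
primers = ["GGTAAC"]
reads = ["TTGGTAACCA", "AAAACCCGTT", "CGGTAACTTG"]
print(select_reads(reads, primers))  # -> ['TTGGTAACCA', 'CGGTAACTTG']
```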
We showed two ways to check the data; one used the Galaxy server and, after assembly, gave a result of 29,802 nt.
Chinese publication, after the MEGAHIT assembler (without alignment): 30,474 nt
Galaxy server, after the MEGAHIT assembler (same raw data and parameters, without alignment): 29,802 nt
Difference: 672 nt
So even the step before alignment, the assembly itself, already yields different results.
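The deviation between the two assembly results can be quantified directly from the figures above:

```python
chinese_nt = 30_474  # MEGAHIT result reported in the Chinese publication
galaxy_nt = 29_802   # MEGAHIT result on the Galaxy server, same raw data and parameters
diff = chinese_nt - galaxy_nt
print(diff)                               # -> 672 nt
print(round(diff / chinese_nt * 100, 2))  # deviation in percent
```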
Besides attempting to replicate the assembly published in the Chinese publication from the published sequence reads, we considered a simple protocol for analyzing the internal structure of large datasets of short sequence reads. With the available sequence data, we were able to calculate consensus sequences for the reference genomes LC312715.1 (HIV) and NC_001653.2 (hepatitis delta) with higher quality than for the reference sequences we considered associated with coronaviruses. This applies in particular to bat-SL-CoVZC45 (GenBank: MG772933.1), which led to the original SARS-CoV-2 hypothesis.
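Computing a consensus sequence from aligned reads is, at its core, a per-column majority vote. A minimal sketch with invented aligned reads (real pipelines add quality weighting, coverage thresholds, and gap handling):

```python
from collections import Counter

def consensus(aligned_reads):
    """Per-column majority vote over equal-length aligned reads."""
    cols = zip(*aligned_reads)
    return "".join(Counter(col).most_common(1)[0][0] for col in cols)

# Invented toy alignment: three reads, each with one sequencing error
aligned = ["ACGTAC", "ACGAAC", "ACGTTC"]
print(consensus(aligned))  # -> 'ACGTAC'
```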
One more thing: as long as biochemists and virologists remain silent about the many accumulating error rates in the "construction" of viral genomes, errors that ALWAYS occur in the conversion of RNA into cDNA, in PCR, and in sequencing, it MUST be assumed that those involved know that the sequences generated this way are artificially produced. Only those short pieces that are also found in humans are used to generate "positivity" via selective PCR. By suppressing what should be a matter of course, namely stating the limits of the techniques used, those involved cover up their scientific fraud. Without an exact statement of all error rates of all steps of all techniques used, ALL of the virologists' claims constitute OBVIOUS scientific fraud. Any physicist would sign off on this immediately.
This is the be-all and end-all for every measurement engineer: each measurement technique is valid only within a very narrow range. Below and above that range, things go haywire, as they do for the virologists.