World Community Grid Forums
Thread Status: Active | Total posts in this thread: 45
Glen David Short
Senior Cruncher | Joined: Nov 6, 2008 | Post Count: 185 | Status: Offline
Great news that we can start the fight so soon after the virus began to spread... hopefully the scourge can be contained and defeated with the help of our crunching.
mgl_ALPerryman
FightAIDS@Home, GO Fight Against Malaria and OpenZika Scientist | USA | Joined: Aug 25, 2007 | Post Count: 283 | Status: Offline
Good evening KLiK,

Not all types of computations will receive a significant boost from rewriting the code to run efficiently on GPUs; the expected gain in performance has to justify the time and effort required to port the code. Molecular mechanics-based code (such as MD simulations with AMBER or CHARMM, and BEDAM, I would assume) can get a huge boost in performance from running on GPUs, but I don't think it is likely to help AutoDock VINA calculations that much. However, I am not an expert on this; it would be better to ask Professor Art Olson, of the FightAIDS@Home project.

Best wishes,
Dr. Alex L. Perryman
mgl_ALPerryman
FightAIDS@Home, GO Fight Against Malaria and OpenZika Scientist | USA | Joined: Aug 25, 2007 | Post Count: 283 | Status: Offline
Good evening Rickjb,
Thank you very much! I'm glad that you're still contributing to World Community Grid projects. You raise an excellent point. Docking-based virtual screens do tend to produce a large number of false positives (that is, many compounds that have good docking scores do not work well when they are then tested against the enzymes or cells involved in the disease). Similarly, many conventional, experimental ("wet lab") High Throughput Screens also suffer from a large number of false positives (compounds that seem to inhibit the target or the growth of the pathogenic cell, but that are later shown to cause that inhibition via non-specific, undesirable mechanisms that are likely to cause toxicity problems). Both approaches can also have problems with false negatives (i.e., they fail to detect some of the compounds that are likely to be decent, specific inhibitors).

As we say in the lab, biology is messy (in more ways than one). Nature is very complicated, especially once you get to the in vivo stage (testing compounds in live organisms, such as mice or people). I am not familiar with the details of the DDDT project: how exactly they prepared the models of the targets, which targets they docked compounds against, how they filtered those results to select compounds, and how they visually inspected those candidate compounds to pick the ones that were then tested in assays. I think DDDT used AutoDock, while we are using AutoDock VINA. They are entirely different programs (even though both were developed by Art Olson's lab), which use different search algorithms and different scoring functions. VINA tends to perform better than most other docking programs, especially in terms of the predicted binding modes. In my hands (and in many other people's), VINA has performed well across many different systems.
Although the calculations are not perfectly accurate, VINA is very fast, and I tend to get good "hit rates" against many different types of targets (the hit rate is the percentage of real, active inhibitors out of all of the candidate compounds that were selected and then tested in "wet lab" assays). But the "visual inspection" stage at the end (which is used to filter out some of the compounds that are likely to be false positives) can be more of an art than a science. Perhaps I'm an artist. ;) Some of my recent papers (including the SAMPL4 challenge and the GO FAM-based screen against InhA from Mycobacterium tuberculosis) explain those details, in terms of hit rates and how I select compounds for assays.

Importantly, since joining the Freundlich lab, I have learned a lot about medicinal chemistry from Joel, regarding what types of compounds (or sub-structural regions within them) are more likely to be non-specific, deceptive compounds (called PAINS, for Pan-Assay INterference compoundS, which inhibit many different types of targets via undesirable mechanisms). In addition, Sean Ekins and Joel have taught me how to create and apply machine learning models, which can also be used to help reduce the number of false positives and/or to eliminate compounds that are likely to be distractions, judged against the key metrics and properties a compound needs to display to have a good chance of working in vivo.

But if our new tools, workflows, and strategies don't pan out, then we'll consider combining BEDAM-based filtering with them in a second phase of the project. Don't expect failure just because other projects didn't always work well; we have learned from our mistakes and the mistakes of others. I think we have a good chance of finding some new compounds that can give us a foothold or foundation from which we can advance and expedite the discovery and development of a drug against Zika. But don't just take my word for it: the results will eventually speak for themselves.

Best wishes,
Dr. Alex L. Perryman
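The "hit rate" that Dr. Perryman defines amounts to a one-line calculation. A minimal sketch with hypothetical numbers, not results from any actual screen:

```python
def hit_rate(actives_confirmed, candidates_tested):
    """Hit rate = confirmed active inhibitors / candidates tested in wet-lab assays."""
    return actives_confirmed / candidates_tested

# hypothetical: 6 of 40 docking-selected candidates confirmed active in assays
print(f"{hit_rate(6, 40):.0%}")  # 15%
```

The denominator is only the compounds that survived docking plus visual inspection and were actually assayed, which is why better filtering of likely false positives (PAINS rules, machine learning models) raises the hit rate without running more wet-lab tests.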
Viktors
Former World Community Grid Tech | Joined: Sep 20, 2004 | Post Count: 653 | Status: Offline
Before the researchers can work on getting rid of false positives, they have to find them first, so the VINA runs come first. BEDAM might be useful for eliminating false positives at a later stage, but BEDAM runs take far too long to be used for the initial screening of compounds. Also, I think dengue protein structures were used in some of the homology modeling for Zika; dengue researchers are in fact collaborating behind the scenes with Zika researchers.
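The staging Viktors describes (fast VINA runs over the whole library first, slow BEDAM rescoring only on the survivors) is a screening-funnel pattern. A schematic sketch in which the scoring functions, cutoffs, and toy data are all invented for illustration; more negative scores are treated as better, as with docking energies:

```python
def funnel_screen(compounds, fast_score, slow_score,
                  fast_cutoff, slow_cutoff, fast_keep=100):
    """Two-stage screen: a cheap scorer prunes the library, and the
    expensive scorer runs only on the shortlist (as with VINA then BEDAM)."""
    shortlist = sorted(
        (c for c in compounds if fast_score(c) <= fast_cutoff),
        key=fast_score)[:fast_keep]
    return [c for c in shortlist if slow_score(c) <= slow_cutoff]

# toy "compounds" represented directly by their fast docking scores
library = [-9.1, -7.8, -6.0, -8.5]
hits = funnel_screen(library,
                     fast_score=lambda c: c,
                     slow_score=lambda c: c + 1.0,  # stand-in rescoring
                     fast_cutoff=-7.0, slow_cutoff=-7.5)
print(hits)  # [-9.1, -8.5]
```

The point of the funnel is exactly the one made above: the expensive stage never sees most of the library, so its long per-compound runtime only has to be paid for the handful of candidates the fast stage lets through.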
KLiK
Master Cruncher | Croatia | Joined: Nov 13, 2006 | Post Count: 3108 | Status: Offline
[Quoting mgl_ALPerryman's reply above]

Dr., thank you for the answer... if I understood correctly: it's up to the current BEDAM project, like FAHB here, to develop the GPU science software! We'll keep on crunching... use the data to cure people!