It would be nice if NVIDIA donated compute time on one of their large in-house machines, the DGX SuperPOD [1] or the DGX Saturn V [2] (#20 and #67 on the latest Top500, respectively), for COVID-19 research. Running simulations in a data center is far more efficient.
Sure, agreed. But Folding@Home is set up for home use, and both of those machines put together add up to the equivalent of maybe 1k-2k gaming rigs. Putting aside the search for maximum possible efficiency, crowd sourcing has the potential to scale to many orders of magnitude larger than what can be done in a data center.
Nothing prevents running the F@H client on the nodes of a compute cluster -- in fact, my colleagues did exactly that (while running a local F@H server), though that was a number of years ago. They did it simply because they wanted to take advantage of the distributed-computing facilities and built-in algorithms provided by the client-server setup.
> crowd sourcing has the potential to scale to many orders of magnitude larger than what can be done in a data center.
It does have potential, but I am skeptical that the "many orders of magnitude" claim will ever materialize.
I'd love to see a cost / benefit analysis on the effective amount of useful work contributed vs the cost of the same in a data center.
“On September 26, 2001, SETI@home had performed a total of 1021 floating point operations. It was acknowledged by the 2008 edition of the Guinness World Records as the largest computation in history.[22] With over 145,000 active computers in the system (1.4 million total) in 233 countries, as of 23 June 2013, SETI@home had the ability to compute over 668 teraFLOPS.[23] For comparison, the Tianhe-2 computer, which as of 23 June 2013 was the world's fastest supercomputer, was able to compute 33.86 petaFLOPS (approximately 50 times greater).”
"On September 26, 2001, SETI@home had performed a total of 1021 floating point operations."
Just to clarify, I think HN formatting ate a caret here (now dang can see in the dark!) and it's supposed to be "10 to the 21"; either that or floating point math is much harder than I remember.
That figure is from 2013, with most of those computers likely even older than 2013 and likely single-core, vs. 3,120,000 brand-new cores on the supercomputer. So in terms of "raw" FLOPS it's mostly a question of more and newer hardware on the supercomputer, not really better architecture.
Good point! Handwavy napkin math gives roughly ~2x more FLOPS per core for the Tianhe-2, which is unsurprising for the latest supercomputer vs. a random home computer. Maybe it's even surprising the number isn't higher...
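A minimal sketch of that napkin math, using the numbers from the quote above; the one-core-per-SETI@home-host figure is an assumption for the sake of the estimate:

    # FLOPS per core, SETI@home network (2013) vs. Tianhe-2 (2013).
    # Figures taken from the Wikipedia quote above.
    seti_flops = 668e12        # ~668 teraFLOPS across the network
    seti_hosts = 145_000       # active computers, assumed single-core

    tianhe2_flops = 33.86e15   # ~33.86 petaFLOPS
    tianhe2_cores = 3_120_000

    seti_per_core = seti_flops / seti_hosts
    tianhe2_per_core = tianhe2_flops / tianhe2_cores
    print(f"SETI@home: ~{seti_per_core / 1e9:.1f} GFLOPS per host")
    print(f"Tianhe-2:  ~{tianhe2_per_core / 1e9:.1f} GFLOPS per core")
    print(f"ratio:     ~{tianhe2_per_core / seti_per_core:.1f}x")   # ~2.4x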
It’s probably worth noting that SETI@home has partitioned the problem space so that it’s “embarrassingly parallel”, as they say. I think raw FLOPS are a good metric in this case, whereas raw FLOPS are a very bad metric for some other problems.
We're talking about Folding@Home, not any (or all) distributed computing project(s). I've not seen recent stats on the size of the FAH network, so I'm not sure how it compares to large supercomputer resources, but plots available through an easy google search show that in the past it was never even larger than the biggest machines on the TOP500, let alone multiple orders of magnitude larger.
(Also, AFAIK garbage flops also count. As far as I recall from talking to a researcher working with FAH a while ago, a large-ish fraction of the results returned were not usable due to data/files being broken.)
> We’re talking about Folding@Home, not any (or all) distributed computing project(s).
Oh, that wasn’t really clear to me. Anyway, there is a close relationship between the SETI project and the Folding project, so this doesn’t seem like a weird stretch to me to compare them. Whatever SETI@home has achieved is evidence for what Folding@home might achieve.
> plots available through an easy google search show that in the past it was never even larger than the biggest machines on the TOP500
But that has little bearing on either what’s possible in the future, nor why they aren’t scheduling compute time on Summit, right?
That’s always true in these peak performance measurements, both for distributed projects and for supercomputers too. Also true for CPU & GPU peak flops specs.
We can’t actually know how much ‘useful’ work is being done in any case, that depends on all kinds of things like what kind of problem is being solved, how data-parallel the problem is, how well the problem is even understood, what algorithms are being used, what bottlenecks there are on data & IO, whether the implementers used python or CUDA.
I just don’t see any reason why Folding@Home would or should refrain from distributed crowd computing just because supercomputers exist. They can both happen. We don’t need to try to come up with efficiency numbers or compare utility/flop in order to see that Folding@Home is producing some useful research results, right?
Maybe it’s worth pointing out that compute time on TOP500 supercomputers is not free, and Folding@Home is not a job you can run for 1 day or 1 week and be done. Nvidia or IBM could donate some time, but it won’t finish the project, so it might make no sense to donate supercomputer time, just like it makes no sense for Folding@Home to seek out or purchase supercomputer time.
> But that has little bearing on either what’s possible in the future,
There is certainly a chance that 10-100x more flops will appear on the fah network, but as I said I just don't believe it will happen.
> nor why they aren’t scheduling compute time on Summit, right?
They are. The researchers who get to use the FAH network certainly do have access to traditional supercomputing resources too. Some types of problems require strong scaling (and reliable resources) which requires HPC iron.
> > Also AFAIK garbage flops also count.
> That’s always true in these peak performance measurements, both for distributed projects and for supercomputers too. Also true for CPU & GPU peak flops specs.
I think you're misunderstanding me: I literally meant that the data files returned by FAH contributors may often be garbage due to data corruption (overclocked cards, no ECC, overheating hardware, poor storage, etc.). Not sure to what extent this is still the case today.
> I just don’t see any reason why Folding@Home would or should refrain from distributed crowd computing just because supercomputers exist. They can both happen. We don’t need to try to come up with efficiency numbers or compare utility/flop in order to see that Folding@Home is producing some useful research results, right?
In principle you're right. There are some open questions around it, though. Briefly: access is a privilege of few; where is the oversight?; inefficient use of hardware (low FLOPS/W) -- just to name a few.
> Maybe it’s worth pointing out that compute time on TOP500 supercomputers is not free,
No, it is not, but access is granted by grant agencies with at least some transparency and oversight as well as scientific review; also, those machines are far more efficient (FLOPS/W).
> and Folding@Home is not a job you can run for 1 day or 1 week and be done.
No, Folding@Home is not the "job"; a "job", at least in the HPC sense, is a molecular dynamics simulation (or many); a set of such jobs is typically what is required to complete a project, the results of which end up in a publication. The computational work corresponding to such a project/paper can, depending on the size of the machine, in fact be run in 1 week, and on a large machine even in 1 day.
On second thought, projects like Exscalate4CoV [1] might have a chance to contribute even in the short term, as they do have the explicit goal of also focusing on "Identify virtually and quickly the drugs available, or at an advanced stage of development, potentially effective"
These projects are cool and do contribute to our understanding, but they are not part of the critical path of fighting this pandemic.
The vaccine(s) are already in phase one clinical trials, and some important antiviral treatments are already in phase three clinical trials.
There is no potential output from this folding project that can accelerate those timelines.
Sure, I'm aware that this is fundamental research (likely not on actual protein folding, but haven't looked at the details), which will probably not have short-term outcomes, not on the timescales required for a 1st gen vaccine (perhaps not even 2nd gen).
That does not, however, take away from my main point; if anything, it supports what I was trying to emphasize: such computational research is better suited to a compute cluster.
I guess my point was that corporations devoting large parts of their compute clusters to an activity that has no potential to help with the current pandemic isn't going to make much business sense.
It's a nice thought from a community manager, but I just checked Nvidia's GeForce Now game-streaming service (where I pay $5 a month for access to 6-hour sessions and a ray-tracing GPU) and it's not available there. Seems like a better headline would be "Reddit user reminds PC gamers that Folding@Home exists. Nvidia's twitter account agrees."
You would think. And where is AWS? Spare cycles for humanity?
It's like driving through some burger joint where they ask you to spare some change for the needy... since, you know, you have so much money and the burger joint can't possibly afford it.
Is it likely that a large cloud company has more spare capacity than the sum of all home GPU owners? I'd have thought a cloud company would optimise for the maximum possible utilisation of their infrastructure, whereas most gaming GPUs are probably off 99% of the time.
Extra capacity in a compute center equates to less power being used, and that power is the largest running cost. Maybe they've maxed out their tax write-offs, I don't know.
They have an announcement on their web site[1] and on twitter[2].
Also, the web client shows information about the tasks being executed (e.g. [3][4]). The COVID-19 tasks are prioritized and will be executed if you keep the client's target setting at the default ("Any").
During my decade-long stint working in labs that worked closer to actual therapies, I never once referred to a molecular dynamics paper or needed those results. Crystallography, sure -- super useful. Molecular dynamics or folding experiments, not exactly.
Which is fine for basic science; I'm sure there's a possibility some of these folding experiments will lead to insights of actual use in the long term. However, Folding@home et al. disingenuously portray an immediate effect on curing AIDS or the latest scare, which I doubt is truly that useful.
I'd argue that this simulation research is about as useful for COVID-19 therapeutic research as banning plastic straws is for solving global warming.
My colleagues routinely run projects that transfer knowledge between computational research, including molecular simulations, and wet-lab research. Computational and wet-lab experiments are regularly designed to complement each other.
Examples of the numerous biophysics/biochemistry research done around me (ranging from research on ion channel dynamics [1] to drug permeability through the skin barrier [2,3], or cryo-EM structure refinement, just to name a few) strongly contradict your experience as well as your final statement -- and then we haven't even mentioned drug binding affinity computations, which these days are routinely done using molecular dynamics by both academia and the pharma industry.
It’s hard to tell if that’s a misinformed jab at plastic straw bans or you’re deliberately drawing attention to the misconception that the ban is due to global warming.
Either way, in a time of misinformation it’s worth clarifying:
“The main reason cited for eliminating plastic straws is their negative impact on our oceans and marine wildlife. Plastic in the ocean is a huge problem — look no further than trash island, or the viral video of a turtle suffering as a result of ocean pollution, to understand that. But of all the plastic that ends up in the ocean, straws make up only four percent of that waste.”
I know this is usually the case (having worked a few years with molecular dynamics), but it would be really interesting to get your opinion on what could be done better, where the bottlenecks are (tools, knowledge, or just CPU power), etc. Would you mind sharing some thoughts here or getting in touch for a chat?
Disclaimer: I probably worked somewhat far from protein folding research, but I could still be considered a "potential consumer" of such research, since projects in my lab, for example, tried to improve a therapeutic protein by structural modification based on structures published by others.
I will say that there's nothing wrong with continuing the folding research that's being done. We have a ways to go before purely predicted protein structures (as opposed to structures from crystallography or EM) become trustworthy enough to base research decisions on. But we can't get there if we don't continue the basic research! In that sense I'm all for these studies. A lab I always appreciate (purely from the perceived merit of the papers I continuously see from them) is that of David Baker. They seem to make meaningful strides towards making synthetic protein folding more powerful as a tool, among other projects.
However, what I see as problems in this field are two things:
1. I've definitely seen papers where they do some molecular dynamics simulations on a protein related to a disease, but it's not clear the results are meaningful to anyone -- the people actually working on therapies for that disease probably weren't even asking for these insights, if there were any. These low-hanging-fruit studies don't advance folding and dynamics research either, because they just apply standard techniques to a problem domain. Often these are just cash grabs by researchers who have specialized in the molecular dynamics field and end up writing grants to disease research agencies, allegedly in aid of curing the disease but practically not. I know this because my lab did it blatantly: when anthrax was a real threat we got funds for anthrax-related research, then we shifted to MS, now cancer. I wouldn't be surprised if they suddenly write grants for coronavirus-related research! This goes on in all these fields.
2. I'd argue that Folding@home keeps misleadingly advertising that they're going to help cure AIDS or corona or whatever. I don't think they are, not directly. In a vague way this feels more like war profiteering to me: these experts are exploiting the current fears to canvass for their projects (even designing them around this), and giving what is closer to false hope than anything when people believe they are contributing towards the end goal of curing disease. I feel like they should be honest about it: this is basic research more than anything else, so they should just say they want computational power for basic science. If people still choose to give their resources, great. Otherwise at least you live and die with dignity.
Anyways this is of course a rant from a PhD who got sick of the system and left, and someone with only an associative knowledge of this particular field. Hence these thoughts should be taken with a grain of salt!
Thanks for the insights! One thing though: protein folding is just one tiny bit of molecular dynamics research (and often not really crucial at that, due to the rise of high-resolution cryo-EM and good X-ray crystallography), so I would hope there are other uses for MD. I studied protein "mechanical" function/dynamics itself, and its manipulation by ligands, for example, and would hope that work like that would eventually prove useful to pharma companies.
But yeah, the MD crowd should have tighter communication with the therapeutics crowd!
BTW, just because you switch your research targets around when funding changes doesn't imply that the research is useless. For example, right now there is a lot of money being funnelled into corona research (for obvious reasons).
I do agree it's difficult to formulate how to ask for donations (money or CPU time): you need to seem optimistic, otherwise nobody would donate, but research takes a huge amount of money, CPU time, and calendar time, so there surely is a gap here between expectations and reality.
> I do agree it's difficult to formulate how to ask for donations (money or CPU time): you need to seem optimistic, otherwise nobody would donate, but research takes a huge amount of money, CPU time, and calendar time, so there surely is a gap here between expectations and reality.
And there is a tricky balance between over-promising and under-delivering vs. being realistic and lacking enough "flashiness" to get funded/thrown money at.
How do you think science works? It's like losing weight or gaining muscle: you have to work hard at it, and there is no single moment where you can say that's when you lost the weight or gained the muscle. It is all incremental progress.
Still, it would be nice to be able to point to some tangible result, such as "the knowledge obtained eventually led to the development of a better treatment for bone cancer", etc.
Doesn't even have to be that. It would even be interesting to know what has been figured out based on the folding even if it hasn't led to any improvements in medical treatments.
> Or is it like huge citation rings and mutual salary justifications?
If you have absolutely nothing to add to the discussion, please consider not saying anything at all. You're just adding noise to an otherwise interesting discussion.
At least NVidia is doing something. You? You're just wasting everyone's time with your trolling.
I left academia after wasting all my twenties believing there's more to it than citation rings. And maybe there is a small amount more. But my conclusion is that in general citation rings (I typically use a more colloquial Reddit phrase involving geometric shapes and descriptions of motion) and ego stroking are the main purpose of academia. Any scientific progress is incidental, or often merely accidental, in spite of people trying not to move the needle for personal gain.
"At least Nvidia is doing something" - like what exactly? Sounds more like a PR move than anything. They can have a more tangible effect donating money for actual efforts to help people probably.
Whenever I said "something is better than nothing" my dad would immediately reply, "but nothing is better than nonsense." He was a smart man indeed.
> I left academia after wasting all my twenties believing there's more to it than citation rings. And maybe there is a small amount more.
I also worked quite a few years in academia, including participating in a few international research programs, and although the indentured-servitude facet is indeed a problem and the "publish or perish" pressure puts the emphasis on an aspect that is not very relevant, I have to say you're completely overblowing and misrepresenting stuff that has no meaning or consequence. If you'd ever had a career in academia you'd be clearly aware that the only thing that counts is whether your paper gets accepted. Citation volumes are entirely meaningless beyond the vanity aspect of it all; they only have an impact in main path analysis, and only if you use an algorithm that weighs edges with respect to connectivity, which is a bit meaningless. Thus you're complaining about something that at most has an impact on your search results, and only if you are a competent researcher who doesn't exercise any independent reasoning, which is absurd. So what leads you to believe that your concerns are meaningful or relevant to the discussion?
It feels like you tried desperately to pull an appeal to authority that has no substance or merit, quite frankly, and in a way that puts your credibility into question.
You've probably not heard of the RCR (relative citation ratio) that the NIH has instituted in recent years, then?
Anyway, by citation rings I mean more than just the h-index or raw citation counts. It also includes cliquey things such as who cites whom, who's a friendly reviewer and who's not, how you keep publishing in a way that makes you look like the expert rather than finding out the truth, etc. But it seems like you're an expert who knows better, so I'll leave this conversation here.
Just drop it. If you actually had any experience in academia you'd be aware that citation counts matter zero beyond vanity. What matters is whether you are able to get published in reference journals, and in some cases in a pre-approved list of publications. That's it. Just give it a rest.
Thanks for your honesty. I'm not from academia but I have made similar conclusions in my own area. I'm trying to find what I can pass on to my children to save them some time, but it's awfully hard.
"Doing something" can be worse than doing nothing. OP may be posting with an agenda, but onlookers on the fence about running up their electric bill will actually want to know that it's for some real benefit and not a counterproductive show of solidarity (displays of solidarity do have their place, don't get me wrong).
The poster makes a good point: a bunch of papers is not actual progress, and huge citation rings and mutual salary justification are a big issue with these papers. Actual progress would be something like: an effective cure for a disease is found.
That all depends on the content of the papers. Protein folding is immensely complex.
> Actual progress would be something like: an effective cure for a disease is found.
This is such a silly way to frame things. Often the reason you don't know a cure for a disease is that you aren't even really sure what causes the disease, or even whether the disease has just one cause. You might imagine we're waiting for a "Cure for Autism", but what if we eventually discover that there are 20 different conditions that can manifest as autism and each one needs a separate treatment? Do you have the patience for that, or will you just throw your hands in the air and complain about no progress?
There has been lots of progress in therapies and treatments for diseases, much of it hampered by the fact that protein interaction and biological pathways are so complex and there's so much we don't understand. So before we can "find a cure" we need to understand how the systems work. For example, in 2009 a drug called Cetuximab was approved to treat colorectal cancer, but ONLY in patients without a specific genetic mutation (KRAS) that renders the drug ineffective. That knowledge comes from understanding cancer genomics which requires a lot of statistics and computational analysis.
I'm not saying FAH is a dramatic world-changing part of this progress but it's clear that there is a LOT we don't know and finding cures for many diseases is going to take a lot of incremental progress that doesn't yield anything obvious immediately.
Papers are where researchers report on their progress by presenting specific findings to the world. Confusing papers for science is like confusing the scoreboard for the sport.
Maybe someone's already done this and I haven't noticed, but having Docker and/or Flatpak images for this seems like an obvious way to include as many machines as possible.
You don't need the fahcontrol package. The COVID-19 work units are high-priority and you're currently very likely to get them, but they haven't halted work on the other ones.
Update: The service still fails to start, but I just ran FAHClient manually again and it connected and is now processing a job, so it must have been a temporary glitch. Cores and GPUs are at full throttle according to htop. I don't see anything indicating whether what it's doing is related to COVID-19, so I'm blindly hoping it's helping with something, but I'll leave it on. (It says: "project 14307"??)
---- original post ------
I was just trying to get it running on Ubuntu 18.04 in response to this post but I'm just getting a lot of errors with not very useful error messages. Any ideas?
When I run FAHClient manually I get, 21:04:10:WARNING:WU01:FS01:Failed to get assignment from 'X.X.X.X:8080': No WUs available for this configuration
(I replaced the IP address with Xs)
When I try starting the service I get an error and journalctl -xe gives me, e.g., (unedited, it actually says "result is RESULT")
-- Unit FAHClient.service has begun starting up.
Mar 14 22:02:27 poole FAHClient[17956]: Starting fahclient ... FAIL
Mar 14 22:02:27 poole systemd[1]: FAHClient.service: Control process exited, code=exited st
Mar 14 22:02:27 poole systemd[1]: FAHClient.service: Failed with result 'exit-code'.
Mar 14 22:02:27 poole systemd[1]: Failed to start LSB: Folding@home Client.
For a more user-friendly UI, you can also go to https://client.foldingathome.org/, which will then attempt to communicate with your client on port 7396 and display the project description among other things. (This also lets you change some settings)
For those running on servers, you can telnet 127.0.0.1 36330 to control your client.
If you just want the project numbers quickly, you can do grep project /var/lib/fahclient/log.txt
You should post your problem in the F@H forum. Back when I last did any serious folding stuff 10 years ago, people on the forum (some of whom still seem to be pretty active today) were immensely helpful with these kinds of problems. https://foldingforum.org/
It’s been a while since I worked in protein folding and tried folding@home, but IIRC the difference between GPUs and CPUs was at about the scale we see in machine learning, something like 20x to 200x.
That makes the usefulness of CPUs for this purpose questionable—you might be contributing more to that other disaster facing humanity than alleviating the current one.
[Disclaimer: How do I know this stuff?
I'm a core developer of GROMACS, one of the major molecular dynamics simulation FOSS community codes. Molecular dynamics is the algorithm/method behind the F@H simulations.
I work as a researcher in high performance computing and have (co-)developed many of the parallel algorithms and GPU acceleration in GROMACS.]
20-200x is simply not true. Typically, such numbers are the result of comparing unoptimized CPU code to moderately or well-optimized GPU code, which is often misleading. (Such differences are, however, perfectly reasonable when comparing hardware-accelerated workloads like ML/DL.) If you compare actually well-optimized codes, you'll see more like a ~4-5x difference in performance for FLOP/instruction-bound code, as is the case for well-optimized molecular dynamics.
Case in point, I recently pointed out the huge difference in CPU performance of two of the top molecular simulation codes, one of which is 8-10x faster on CPUs than the other, solving the same problem [2].
F@H relies on GROMACS as its CPU engine [1], which happens to be the same code I quoted above as the fast one. The trouble is that F@H has not updated their CPU engine for many years and distributes CPU binaries which lack the crucial SIMD optimizations needed to make use of AVX2/AVX512 on modern x86 CPUs, as well as years of algorithmic improvement and code optimization we made. These two factors combined lead to _significantly_ lower F@H CPU performance compared to what they would have were they using a recent GROMACS engine.
Consequently, due to the combination of an inherent performance advantage of GPUs and the severely outdated CPU engine, it is indeed not worth wasting energy with running F@H on CPUs.
Edit 1: adjusted wording to reflect that running an outdated GROMACS version with suboptimal SIMD optimizations on modern hardware can result in a range of performance differences, depending on hardware and inputs.
Edit 2: fixed typo + formatting.
To that question, assuming by "anyone" here you are asking about other donate-your-cycles-distributed-computing-projects:
I am not too familiar with how well-optimized the codes of different @home projects are.
Taking a few steps back, perhaps the efficiency of these codes is the lesser issue and, to be honest, in some (many?) cases other forms of donation/contribution may do more to further scientific progress than simply crunching numbers on one's home PC.
Totally, but if the work done @home is useful, donating compute time makes economic sense, I think.
If I'm willing to donate $10, I can donate money, and it may be used to buy $10 worth of compute, which has to cover all costs including hardware and administration.
Or I can donate $10 worth of pure electricity, with the other marginal costs covered for free or at very small extra cost, since I already own the hardware for other purposes which it's temporarily not being used for.
In the latter case the value of my $10 is higher, I theorize. Again, given that the @home project is truly useful.
Unfortunately, there is typically no way to directly donate funds (especially not $10, even if there are 10,000 people who'd do that on a monthly basis) to some research group of one's choosing. Therefore the choice is whether to donate to an @home project and trust that what they do is meaningful and that the donated resources are used in a responsible manner.
In that respect, the responsibility of deciding whether to ask for donations and how to make good use of them lies solely with the teams that receive them. Without oversight, however, it would be foolish of them to be overly critical of their own shortcomings, as there is great benefit to the cheap FLOPS (and good PR) that F@H brings.
I was about to suggest that it would be great to set up a merit-based funding scheme somewhat akin to the governmental funding agencies, but run independently by a "council of the people". I'm however uncertain how effective such an organization could be at awarding funding in a responsible and effective manner.
> When donating money or compute time, I want my donation to be used efficiently and not be wasted by high overheads.
Fair point. I think this is something you should bring up with the authors of Folding@Home. I do not work on that project.
My personal view on this is that there is always a cost/benefit balance that one has to strike which is often tricky especially given the considerably constrained resources as it is typically the case in academic computational/simulation tool development.
It is however, as you point out, a great responsibility of the researchers and developers of codes to make sure that choices made and action taken (or not) do not lead to disproportionate waste of resources donated by volunteers or awarded through a grant by research funding agencies.
I do not know the detailed reasons why F@H chose not to update the CPU "core" since FahCore_a7 (AFAIK based on GROMACS code from 2014), but it is likely related to the aforementioned cost/benefit analysis done by their team.
One of the motivations could have been (just hypothesizing) that the software engineering effort estimated to be required to update the CPU FahCore (which generates a very small fraction of the "points") would have taken resources away from the GPU FahCore.
My 3900X/2070S system, when folding with both CPU and GPU at full load, uses 430 W at the wall plug. Even at 21 cents per kWh this is about 9 cents per hour.
When setting up your machine, it's best to go to slots and remove your CPU. The GPU tends to make nearly all the points, especially if you have a newer GPU. I removed my 12-core 3900X from the slots and barely noticed a drop in points, while my system ran much cooler and with even lower power usage. Leave the disease setting at the default and you'll likely get COVID-19 projects.
I use both of my 2070S cards and get an estimated 3.2M PPD for about $2 per 24 hours of electricity.
TL;DR: PCs don't use much power. Get Folding@home and just run your GPU(s) at full. Leave the settings at default to get COVID-19 projects.
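For anyone who wants to sanity-check those numbers, here is the same arithmetic spelled out, using only the wall-plug draw and electricity price quoted above:

    # Rough folding electricity cost, using the figures from the comment above.
    wall_draw_kw = 0.430       # 430 W at the wall plug
    price_per_kwh = 0.21       # 21 cents per kWh

    cost_per_hour = wall_draw_kw * price_per_kwh
    cost_per_day = cost_per_hour * 24

    print(f"~${cost_per_hour:.2f} per hour")  # ~$0.09 per hour
    print(f"~${cost_per_day:.2f} per day")    # ~$2.17 per day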
Just as with crypto mining, I'd recommend setting up a dedicated folding profile in your GPU settings. You want to control the GPU power limit (e.g. reduce it a bit) and add a more aggressive fan curve.
I applied this strategy when Bitcoin mining on my 1080 Ti, which I got shortly after release, aiming for sub-60 C temperatures. The proceeds paid for the GPU, and I didn't fry it either -- it's still my daily driver today.
(The 1080 Ti is/was such a beast, with exceptional value for money.)
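On Linux, the power-limit part of this can be done with nvidia-smi; a minimal sketch (the 180 W figure is purely illustrative, not a recommendation -- check your card's supported range, and fan curves need a separate tool such as nvidia-settings or your vendor's utility):

    # Enable persistence mode so the setting sticks, then lower the board power limit.
    sudo nvidia-smi -pm 1
    sudo nvidia-smi -pl 180   # watts; illustrative value only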
Same problem on my GTX 1050 (just tried updating drivers, no change). However, my laptop's GTX460M has pulled down project 11745 and is folding away.
EDIT: after fiddling with gpu-index, cuda-index, and opencl-index (I had to manually set them to point to my GTX 1050; I also have a BARTS card that is unsupported) I was able to get a WU to download :)
I want to use the GTX 1050, so I set gpu-index to 1. There's only one CUDA device, so I set cuda-index to 0. To use the OpenCL device for the GTX, I looked at its bus number (bus 2), found the OpenCL device on bus 2 (device 0), so I set opencl-index to 0. Not sure if this is the right way, but it worked for me.
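For reference, this is roughly how those settings might look in the GPU slot section of the client's config.xml -- a best-guess sketch based on the description above rather than official documentation, with the slot id and index values from my setup (adjust them for your own hardware):

    <slot id='1' type='GPU'>
      <gpu-index v='1'/>
      <cuda-index v='0'/>
      <opencl-index v='0'/>
    </slot>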
Assuming your question is in good faith, the answer is: when you rent something on AWS, you are paying a slice of the total cost of ownership. When you run something on your gaming rig's GPU, you already paid for the machine/GPU; now you are "just" paying for electricity and depreciation. So if you have a gaming rig sitting around while you are writing a web app or whatever, let it do some science for just the cost of electricity.
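A rough, illustrative comparison (the cloud price and GPU wattage below are assumptions for the sake of the example, not quotes; substitute real numbers for your region and provider):

    # Marginal cost of folding on hardware you already own vs. renting a cloud GPU.
    # All figures here are illustrative assumptions.
    gpu_draw_kw = 0.25          # assume ~250 W for one gaming GPU under load
    electricity_per_kwh = 0.21  # $/kWh, same rate quoted elsewhere in the thread
    cloud_gpu_per_hour = 3.00   # assumed hourly price of a rented datacenter GPU

    own_cost_per_hour = gpu_draw_kw * electricity_per_kwh
    print(f"own GPU:   ~${own_cost_per_hour:.3f}/hour (electricity only)")
    print(f"cloud GPU: ~${cloud_gpu_per_hour:.2f}/hour (covers full cost of ownership)")
    print(f"ratio:     ~{cloud_gpu_per_hour / own_cost_per_hour:.0f}x")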
This is cool because it can help quickly determine structures if the virus mutates. However, the obsession with structural biology isn't necessarily optimal, because we're talking about shapes of things (analog, constantly changing) instead of discrete states of molecular circuits. Check out the book "Wetware: A Computer in Every Living Cell" -- cells figured out logic long before we did. Treatments which operate from a structural paradigm, like small molecules, antibodies, and vaccines, are analog. Treatments which operate from a sequence/switching/circuit paradigm are digital.
The digital solution is stronger: delete the coronavirus RNA. This isn't dependent on the shape of some molecule and how some other molecule docks with it inside a molecular blender. Genome engineering targets sequences directly, works with novel viruses right away, tolerates mutations with wobble base pairs, works in immunocompromised patients, has potentially zero side effects, etc. etc.
I applaud these efforts but implore you to consider: why throw a fancy molecular wrench into the virus machine when you can instead set the “virus switch” to “off?”
If anyone’s gonna grok the benefit of digital over analog, it ought to be the Hacker News community. The hardest part is delivery: we need to affordably mass-produce nanoparticles with the CRISPR DNA inside. That’s the number one thing holding back biotech from curing many diseases including this one: AAV is the standard vector, but it only works once before you’re immune to the gene therapy.
If you can help mass produce nanoparticles with microfluidics or self assembly please email me bion@bitpharma.com
Folding At Home is cool. Is folding and shape-medicine the best way to cure human disease? How do we know the answer to that unless we try to program cells like we program computers?
First, that's not how any of this works, and second, said gaming PC owners wouldn't take part in this experiment to save themselves first and foremost but rather to help save as many other people as possible.
Corporations are just a way to organize people doing things that people demand.
Yes, the people who are developing vaccines generally have jobs; you would not have learned their craft out of the pure goodness of your heart, so why should they?
If you want to help somebody develop and test a DNA vaccine for SARS-2-COV without FDA approval or perfect medical ethics, there is an option for you as well: https://youtu.be/7BTvVnOgc10
[1] https://www.top500.org/system/179691
[2] https://www.top500.org/system/178928