
FutureBit Moonlander2 Scrypt for Boinc?

I'm new to BOINC, but I've been reading up on it off and on for a few years. I would like to connect to BOINC and contribute work from a dedicated PC powered by a solar panel. I have heard that the USB protocol is just not fast enough for any kind of BOINC work or mining... but then I see this beast come out.
https://www.newegg.com/Product/Product.aspx?Item=9SIAGAE74B3404&ignorebbr=1&nm_mc=KNC-GoogleMKP-PC&cm_mmc=KNC-GoogleMKP-PC-_-pla-_-Accessories+-+USB-_-9SIAGAE74B3404&gclid=EAIaIQobChMIvr7TiuKh3QIVmoqzCh0QGQvmEAQYBCABEgK1-PD_BwE&gclsrc=aw.ds
I see this is for Scrypt-based coins... BOINC is neither Scrypt nor Bitcoin. How hard would it be to use this device for BOINC? I am willing to put a lot of time into this project and learn as much as I can.
I am considering plugging the Moonlander 2 into a Raspberry Pi, but intuition tells me that is just plain wrong. I have a 2005 Lenovo PC; I could possibly strip it down just so it can network to BOINC and run the Moonlander... but that makes me worry about power consumption.
The solar panel puts out about 250W
Any help is appreciated
submitted by bottlerocket_inorbit to BOINC [link] [comments]

03-24 18:49 - 'New ICO: Society! Donate CPU time today. boinc.bakerlab.org' (i.redd.it) by /u/mcandre removed from /r/Bitcoin within 110-120min

New ICO: Society! Donate CPU time today. boinc.bakerlab.org
Go1dfish undelete link
unreddit undelete link
Author: mcandre
submitted by removalbot to removalbot [link] [comments]

Preventing double-spends is an "embarrassingly parallel" massive search problem - like Google, SETI@home, Folding@home, or PrimeGrid. BUIP024 "address sharding" is similar to Google's MapReduce & Berkeley's BOINC grid computing - "divide-and-conquer" providing unlimited on-chain scaling for Bitcoin.

TL;DR: Like all other successful projects involving "embarrassingly parallel" search problems in massive search spaces, Bitcoin can and should - and inevitably will - move to a distributed computing paradigm based on successful "sharding" architectures such as Google Search (based on Google's MapReduce algorithm), or SETI@home, Folding@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture) - which use simple mathematical "decompose" and "recompose" operations to break big problems into tiny pieces, providing virtually unlimited scaling (plus fault tolerance) at the logical / software level, on top of possibly severely limited (and faulty) resources at the physical / hardware level.
The discredited "heavy" (and over-complicated) design philosophy of centralized "legacy" dev teams such as Core / Blockstream (requiring every single node to download, store and verify the massively growing blockchain, and pinning their hopes on non-existent off-chain vaporware such as the so-called "Lightning Network" which has no mathematical definition and is missing crucial components such as decentralized routing) is doomed to failure, and will be out-competed by simpler on-chain "lightweight" distributed approaches such as distributed trustless Merkle trees or BUIP024's "Address Sharding" emerging from independent devs such as u/thezerg1 (involved with Bitcoin Unlimited).
No one in their right mind would expect Google's vast search engine to fit entirely on a Raspberry Pi behind a crappy Internet connection - and no one in their right mind should expect Bitcoin's vast financial network to fit entirely on a Raspberry Pi behind a crappy Internet connection either.
Any "normal" (ie, competent) company with $76 million to spend could provide virtually unlimited on-chain scaling for Bitcoin in a matter of months - simply by working with devs who would just go ahead and apply the existing obvious mature successful tried-and-true "recipes" for solving "embarrassingly parallel" search problems in massive search spaces, based on standard DISTRIBUTED COMPUTING approaches like Google Search (based on Google's MapReduce algorithm), or [email protected], [email protected], or PrimeGrid (based on Berkeley's BOINC grid computing architecture). The fact that Blockstream / Core devs refuse to consider any standard DISTRIBUTED COMPUTING approaches just proves that they're "embarrassingly stupid" - and the only way Bitcoin will succeed is by routing around their damage.
Proven, mature sharding architectures like the ones powering Google Search, SETI@home, Folding@home, or PrimeGrid will allow Bitcoin to achieve virtually unlimited on-chain scaling, with minimal disruption to the existing Bitcoin network topology and mining and wallet software.
Longer Summary:
People who argue that "Bitcoin can't scale" - because it involves major physical / hardware requirements (lots of processing power, upload bandwidth, storage space) - are at best simply misinformed or incompetent - or at worst outright lying to you.
Bitcoin mainly involves searching the blockchain to prevent double-spends - and so it is similar to many other projects involving "embarrassingly parallel" searching in massive search spaces - like Google Search, SETI@home, Folding@home, or PrimeGrid.
But there's a big difference between those long-running wildly successful massively distributed infinitely scalable parallel computing projects, and Bitcoin.
Those other projects do their data storage and processing across a distributed network. But Bitcoin (under the misguided "leadership" of Core / Blockstream devs) insists on a fatally flawed design philosophy where every individual node must be able to download, store and verify the system's entire data structure. And it's even worse than that - they want to let the least powerful nodes in the system dictate the resource requirements for everyone else.
Meanwhile, those other projects are all based on some kind of "distributed computing" involving "sharding". They achieve massive scaling by adding a virtually unlimited (and fault-tolerant) logical / software layer on top of the underlying resource-constrained / limited physical / hardware layer - using approaches like Google's MapReduce algorithm or Berkeley's Open Infrastructure for Network Computing (BOINC) grid computing architecture.
This shows that it is a fundamental error to continue insisting on viewing an individual Bitcoin "node" as the fundamental "unit" of the Bitcoin network. Coordinated distributed pools already exist for mining the blockchain - and eventually coordinated distributed trustless architectures will also exist for verifying and querying it. Any architecture or design philosophy where a single "node" is expected to be forever responsible for storing or verifying the entire blockchain is the wrong approach, and is doomed to failure.
The most well-known example of this doomed approach is Blockstream / Core's "roadmap" - which is based on two disastrously erroneous design requirements:
  • Core / Blockstream erroneously insist that the entire blockchain must always be downloadable, storable and verifiable on a single node, as dictated by the least powerful nodes in the system (eg, u/bitusher in Costa Rica, or u/Luke-Jr in the underserved backwoods of Florida); and
  • Core / Blockstream support convoluted, incomplete off-chain scaling approaches such as the so-called "Lightning Network" - which lacks a mathematical foundation, and also has some serious gaps (eg, no solution for decentralized routing).
Instead, the future of Bitcoin will inevitably be based on unlimited on-chain scaling, where all of Bitcoin's existing algorithms and data structures and networking are essentially preserved unchanged / as-is - but they are distributed at the logical / software level using sharding approaches such as u/thezerg1's BUIP024 or distributed trustless Merkle trees.
These kinds of sharding architectures will allow individual nodes to use a minimum of physical resources to access a maximum of logical storage and processing resources across a distributed network with virtually unlimited on-chain scaling - where every node will be able to use and verify the entire blockchain without having to download and store the whole thing - just like Google Search, SETI@home, Folding@home, or PrimeGrid and other successful distributed sharding-based projects have already been successfully doing for years.
Details:
Sharding, which has been so successful in many other areas, is a topic that keeps resurfacing in various shapes and forms among independent Bitcoin developers.
The highly successful track record of sharding architectures on other projects involving "embarrassingly parallel" massive search problems (harnessing resource-constrained machines at the physical level into a distributed network at the logical level, in order to provide fault tolerance and virtually unlimited scaling searching for web pages, interstellar radio signals, protein sequences, or prime numbers in massive search spaces up to hundreds of terabytes in size) provides convincing evidence that sharding architectures will also work for Bitcoin (which also requires virtually unlimited on-chain scaling, searching the ever-expanding blockchain for previous "spends" from an existing address, before appending a new transaction from this address to the blockchain).
Below are some links involving proposals for sharding Bitcoin, plus more discussion and related examples.
BUIP024: Extension Blocks with Address Sharding
https://np.reddit.com/btc/comments/54afm7/buip024_extension_blocks_with_address_sharding/
Why aren't we as a community talking about Sharding as a scaling solution?
https://np.reddit.com/Bitcoin/comments/3u1m36/why_arent_we_as_a_community_talking_about/
(There are some detailed, partially encouraging comments from u/petertodd in that thread.)
[Brainstorming] Could Bitcoin ever scale like BitTorrent, using something like "mempool sharding"?
https://np.reddit.com/btc/comments/3v070a/brainstorming_could_bitcoin_ever_scale_like/
[Brainstorming] "Let's Fork Smarter, Not Harder"? Can we find some natural way(s) of making the scaling problem "embarrassingly parallel", perhaps introducing some hierarchical (tree) structures or some natural "sharding" at the level of the network and/or the mempool and/or the blockchain?
https://np.reddit.com/btc/comments/3wtwa7/brainstorming_lets_fork_smarter_not_harder_can_we/
"Braiding the Blockchain" (32 min + Q&A): We can't remove all sources of latency. We can redesign the "chain" to tolerate multiple simultaneous writers. Let miners mine and validate at the same time. Ideal block time / size / difficulty can become emergent per-node properties of the network topology
https://np.reddit.com/btc/comments/4su1gf/braiding_the_blockchain_32_min_qa_we_cant_remove/
Some kind of sharding - perhaps based on address sharding as in BUIP024, or based on distributed trustless Merkle trees as proposed earlier by u/thezerg1 - is very likely to turn out to be the simplest, and safest approach towards massive on-chain scaling.
A thought experiment showing that we already have most of the ingredients for a kind of simplistic "instant sharding"
A simplistic thought experiment can be used to illustrate how easy it could be to do sharding - with almost no changes to the existing Bitcoin system.
Recall that Bitcoin addresses and keys are composed from an alphabet of 58 characters. So, in this simplified thought experiment, we will outline a way to add a kind of "instant sharding" within the existing system - by using the last character of each address in order to assign that address to one of 58 shards.
(Maybe you can already see where this is going...)
Similar to vanity address generation, a user who wants to receive Bitcoins would be required to generate 58 different receiving addresses (each ending with a different character) - and, similarly, miners could be required to pick one of the 58 shards to mine on.
Then, when a user wanted to send money, they would have to look at the last character of their "send from" address - and also select a "send to" address ending in the same character - and presto! we already have a kind of simplistic "instant sharding". (And note that this part of the thought experiment would require only the "softest" kind of soft fork: indeed, we haven't changed any of the code at all, but instead we simply adopted a new convention by agreement, while using the existing code.)
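To make the convention concrete, here is a minimal, hypothetical Python sketch (not from the original post; the "addresses" are toy stand-ins rather than real Bitcoin addresses) of the "last character picks the shard" rule, together with the vanity-style grinding a receiver would do to obtain an address in a chosen shard:

```python
# Minimal sketch of the "instant sharding" convention described above.
# Toy stand-ins only - these are not real Bitcoin addresses or keys.

import hashlib

BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def shard_of(address: str) -> int:
    """Map an address to one of 58 shards using its final Base58 character."""
    return BASE58_ALPHABET.index(address[-1])

def toy_address(seed: bytes) -> str:
    """Derive a toy Base58-looking string from a seed (illustration only)."""
    digest = hashlib.sha256(seed).digest()
    return "".join(BASE58_ALPHABET[b % 58] for b in digest[:20])

def grind_address_for_shard(target_shard: int, label: str) -> str:
    """Vanity-style search: keep deriving toy addresses until one lands
    in the desired shard (ie, ends with the right character)."""
    nonce = 0
    while True:
        candidate = toy_address(f"{label}:{nonce}".encode())
        if shard_of(candidate) == target_shard:
            return candidate
        nonce += 1

if __name__ == "__main__":
    addr = grind_address_for_shard(target_shard=7, label="alice")
    print(addr, "-> shard", shard_of(addr))
```

On average the grinding loop only needs about 58 attempts per shard, which is why generating 58 receiving addresses (one per shard) would be cheap for any wallet.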
Of course, this simplistic "instant sharding" example would still need a few more features in order to be complete - but they'd all be fairly straightforward to provide:
  • A transaction can actually send from multiple addresses, to multiple addresses - so the approach of simply looking at the final character of a single (receive) address would not be enough to instantly assign a transaction to a particular shard. But a slightly more sophisticated decision criterion could easily be developed - and computed using code - to assign every transaction to a particular shard, based on the "from" and "to" addresses in the transaction (see the sketch after this list). The basic concept from the "simplistic" example would remain the same, sharding the network based on some characteristic of transactions.
  • If we had 58 shards, then the mining reward would have to be decreased to 1/58 of what it currently is - and also the mining hash power on each of the shards would end up being roughly 1/58 of what it is now. In general, many people might agree that decreased mining rewards would actually be a good thing (spreading out mining rewards among more people, instead of the current problems where mining is done by about 8 entities). Also, network hashing power has been growing insanely for years, so we probably have way more than enough needed to secure the network - after all, Bitcoin was secure back when network hash power was 1/58 of what it is now.
  • This simplistic example does not handle cases where you need to do "cross-shard" transactions. But it should be feasible to implement such a thing. The various proposals from u/thezerg1 such as BUIP024 do deal with "cross-shard" transactions.
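Here is one purely illustrative criterion (an assumption of mine, not part of BUIP024): hash the sorted input and output addresses of a transaction and reduce the result modulo the number of shards, so that every node independently computes the same shard for the same transaction:

```python
# Illustrative only: a deterministic rule for assigning a whole transaction
# (multiple inputs and outputs) to one of N shards. Every node applying the
# same rule to the same transaction arrives at the same shard.

import hashlib

NUM_SHARDS = 58

def shard_for_transaction(input_addrs: list[str], output_addrs: list[str]) -> int:
    material = "|".join(sorted(input_addrs) + sorted(output_addrs)).encode()
    digest = hashlib.sha256(material).digest()
    return int.from_bytes(digest, "big") % NUM_SHARDS

print(shard_for_transaction(["1AliceAddr", "1AliceChange"], ["1BobAddr"]))
```

Any function of this shape would do; the important property is only that it is deterministic and depends solely on data already present in the transaction.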
(Also, the fact that a simplified address-based sharding mechanics can be outlined in just a few paragraphs as shown here suggests that this might be "simple and understandable enough to actually work" - unlike something such as the so-called "Lightning Network", which is actually just a catchy-sounding name with no clearly defined mechanics or mathematics behind it.)
Addresses are plentiful, and can be generated locally, and you can generate addresses satisfying a certain pattern (eg ending in a certain character) the same way people can already generate vanity addresses. So imposing a "convention" where the "send" and "receive" address would have to end in the same character (and where the miner has to only mine transactions in that shard) - would be easy to understand and do.
Similarly, the earlier solution proposed by u/thezerg1, involving distributed trustless Merkle trees, is easy to understand: you'd just be distributing the Merkle tree across multiple nodes, while still preserving its immutability guarantees.
Such approaches don't really change much about the actual system itself. They preserve the existing system, and just split its data structures into multiple pieces, distributed across the network. As long as we have the appropriate operators for decomposing and recomposing the pieces, then everything should work the same - but more efficiently, with unlimited on-chain scaling, and much lower resource requirements.
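To make the "distributed trustless Merkle tree" intuition concrete, here is a small hypothetical sketch (mine, not u/thezerg1's actual proposal): a node that stores only one leaf plus its Merkle branch can still prove that the leaf belongs to a tree whose root everyone agrees on, without holding the rest of the tree.

```python
# Sketch of Merkle-branch verification: a leaf, its sibling hashes (the
# "branch") and the agreed-upon root are enough to verify inclusion - the
# rest of the tree can live on other machines. Illustration only.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])        # duplicate the last node, as Bitcoin does
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_branch(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Collect sibling hashes from leaf to root; the bool marks a right-hand sibling."""
    level = [h(leaf) for leaf in leaves]
    branch = []
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        sibling = index ^ 1
        branch.append((level[sibling], sibling > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return branch

def verify(leaf: bytes, branch: list[tuple[bytes, bool]], root: bytes) -> bool:
    node = h(leaf)
    for sibling, is_right in branch:
        node = h(node + sibling) if is_right else h(sibling + node)
    return node == root

leaves = [b"tx-a", b"tx-b", b"tx-c", b"tx-d", b"tx-e"]
root = merkle_root(leaves)
proof = merkle_branch(leaves, 2)
print(verify(b"tx-c", proof, root))        # True - only the leaf and its branch were needed
```

The storage saving is the point: the branch grows only logarithmically with the number of leaves, so a node can verify membership in a huge tree while storing almost none of it.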
The examples below show how these kinds of "sharding" approaches have already been implemented successfully in many other systems.
Massive search is already efficiently performed with virtually unlimited scaling using divide-and-conquer / decompose-and-recompose approaches such as MapReduce and BOINC.
Every time you do a Google search, you're using Google's MapReduce algorithm to solve an embarrassingly parallel problem.
And distributed computing grids using the Berkeley Open Infrastructure for Network Computing (BOINC) are constantly setting new records searching for protein combinations, prime numbers, or radio signals from possible intelligent life in the universe.
We all use Google to search hundreds of terabytes of data on the web and get results in a fraction of a second - using cheap "commodity boxes" on the server side, and possibly using limited bandwidth on the client side - with fault tolerance to handle crashing servers and dropped connections.
Other examples are Folding@home, SETI@home and PrimeGrid - involving searching massive search spaces for protein sequences, interstellar radio signals, or prime numbers hundreds of thousands of digits long. Each of these examples uses sharding to decompose a giant search space into smaller sub-spaces which are searched separately in parallel and then the resulting (sub-)solutions are recomposed to provide the overall search results.
It seems obvious to apply this tactic to Bitcoin - searching the blockchain for existing transactions involving a "send" from an address, before appending a new "send" transaction from that address to the blockchain.
Some people might object that those systems are different from Bitcoin.
But we should remember that preventing double-spends (the main thing that Bitcoin does) is, after all, an embarrassingly parallel massive search problem - and all of these other systems also involve embarrassingly parallel massive search problems.
The mathematics of Google's MapReduce and Berkeley's BOINC is simple, elegant, powerful - and provably correct.
Google's MapReduce and Berkeley's BOINC have demonstrated that in order to provide massive scaling for efficient searching of massive search spaces, all you need is...
  • an appropriate "decompose" operation,
  • an appropriate "recompose" operation,
  • the necessary coordination mechanisms
...in order to distribute a single problem across multiple, cheap, fault-tolerant processors.
This allows you to decompose the problem into tiny sub-problems, solving each sub-problem to provide a sub-solution, and then recompose the sub-solutions into the overall solution - gaining virtually unlimited scaling and massive efficiency.
The only "hard" part involves analyzing the search space in order to select the appropriate DECOMPOSE and RECOMPOSE operations which guarantee that recomposing the "sub-solutions" obtained by decomposing the original problem is equivalent to the solving the original problem. This essential property could be expressed in "pseudo-code" as follows:
  • (DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE)
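As a toy illustration of that property (my own example, not from the post): for the problem "find every index holding a target value", DECOMPOSE splits the data into chunks, SUB-SOLVE searches each chunk independently (here on separate processes, standing in for separate machines), and RECOMPOSE merges the per-chunk hits. The final assertion is exactly the (DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE) property for this particular problem:

```python
# Toy decompose / sub-solve / recompose example for a parallel search problem.
# The assert at the end checks that the sharded search equals the direct search.

from concurrent.futures import ProcessPoolExecutor

def solve(data, target):
    return [i for i, x in enumerate(data) if x == target]

def decompose(data, num_shards):
    size = (len(data) + num_shards - 1) // num_shards
    return [(start, data[start:start + size]) for start in range(0, len(data), size)]

def sub_solve(shard, target):
    offset, chunk = shard
    return [offset + i for i, x in enumerate(chunk) if x == target]

def recompose(partial_results):
    return sorted(i for part in partial_results for i in part)

if __name__ == "__main__":
    data = [7, 3, 7, 1, 9, 7, 2] * 1000
    shards = decompose(data, 8)
    with ProcessPoolExecutor() as pool:
        parts = pool.map(sub_solve, shards, [7] * len(shards))
    assert recompose(list(parts)) == solve(data, 7)   # sharded search == direct search
    print("decompose / sub-solve / recompose matches the direct solution")
```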
Selecting the appropriate DECOMPOSE and RECOMPOSE operations (and implementing the inter-machine communication coordination) can be somewhat challenging, but it's certainly doable.
In fact, as mentioned already, these things have already been done in many distributed computing systems. So there's hardly any "original" work to be done in this case. All we need to focus on now is translating the existing single-processor architecture of Bitcoin to a distributed architecture, adopting the mature, proven, efficient "recipes" provided by the many examples of successful distributed systems already up and running, such as Google Search (based on Google's MapReduce algorithm), or SETI@home, Folding@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture).
That's what any "competent" company with $76 million to spend would have done already - simply work with some devs who know how to implement open-source distributed systems, and focus on adapting Bitcoin's particular data structures (merkle trees, hashed chains) to a distributed environment. That's a realistic roadmap that any team of decent programmers with distributed computing experience could easily implement in a few months, and any decent managers could easily manage and roll out on a pre-determined schedule - instead of all these broken promises and missed deadlines and non-existent vaporware and pathetic excuses we've been getting from the incompetent losers and frauds involved with Core / Blockstream.
ASIDE: MapReduce and BOINC are based on math - but the so-called "Lightning Network" is based on wishful thinking involving kludges on top of workarounds on top of hacks - which is how you can tell that LN will never work.
Once you have succeeded in selecting the appropriate mathematical DECOMPOSE and RECOMPOSE operations, you get simple massive scaling - and it's also simple for anyone to verify that these operations are correct - often in about a half-page of math and code.
An example of this kind of elegance and brevity (and provable correctness) involving compositionality can be seen in this YouTube clip by the accomplished mathematician Lucius Greg Meredith presenting some operators for scaling Ethereum - in just a half page of code:
https://youtu.be/uzahKc_ukfM?t=1101
Conversely, if you fail to select the appropriate mathematical DECOMPOSE and RECOMPOSE operations, then you end up with a convoluted mess of wishful thinking - like the "whitepaper" for the so-called "Lightning Network", which is just a cool-sounding name with no actual mathematics behind it.
The LN "whitepaper" is an amateurish, non-mathematical meandering mishmash of 60 pages of "Alice sends Bob" examples involving hacks on top of workarounds on top of kludges - also containing a fatal flaw (a lack of any proposed solution for doing decentralized routing).
The disaster of the so-called "Lightning Network" - involving adding never-ending kludges on top of hacks on top of workarounds (plus all kinds of "timing" dependencies) - is reminiscent of the "epicycles" which were desperately added in a last-ditch attempt to make Ptolemy's "geocentric" system work - based on the incorrect assumption that the Sun revolved around the Earth.
This is how you can tell that the approach of the so-called "Lightning Network" is simply wrong, and it would never work - because it fails to provide appropriate (and simple, and provably correct) mathematical DECOMPOSE and RECOMPOSE operations in less than a single page of math and code.
Meanwhile, sharding approaches based on a DECOMPOSE and RECOMPOSE operation are simple and elegant - and "functional" (ie, they don't involve "procedural" timing dependencies like keeping your node running all the time, or closing out your channel before a certain deadline).
Bitcoin only has 6,000 nodes - but the leading sharding-based projects have over 100,000 nodes, with no financial incentives.
Many of these sharding-based projects have many more nodes than the Bitcoin network.
The Bitcoin network currently has about 6,000 nodes - even though there are financial incentives for running a node (ie, verifying your own Bitcoin balance).
Folding@home and SETI@home each have over 100,000 active users - even though these projects don't provide any financial incentives. This higher number of users might be due in part to the low resource demands required in these BOINC-based projects, which are all based on sharding the data set.
Folding@home
As part of the client-server network architecture, the volunteered machines each receive pieces of a simulation (work units), complete them, and return them to the project's database servers, where the units are compiled into an overall simulation.
In 2007, Guinness World Records recognized Folding@home as the most powerful distributed computing network. As of September 30, 2014, the project has 107,708 active CPU cores and 63,977 active GPUs for a total of 40.190 x86 petaFLOPS (19.282 native petaFLOPS). At the same time, the combined efforts of all distributed computing projects under BOINC totals 7.924 petaFLOPS.
SETI@home
Using distributed computing, SETI@home sends millions of chunks of data to be analyzed off-site by home computers, which then report the results. Thus what appears an onerous problem in data analysis is reduced to a reasonable one by aid from a large, Internet-based community of borrowed computer resources.
Observational data are recorded on 2-terabyte SATA hard disk drives at the Arecibo Observatory in Puerto Rico, each holding about 2.5 days of observations, which are then sent to Berkeley. Arecibo does not have a broadband Internet connection, so data must go by postal mail to Berkeley. Once there, the data is divided in both time and frequency domains into work units of 107 seconds of data, or approximately 0.35 megabytes (350 kilobytes or 350,000 bytes), which overlap in time but not in frequency. These work units are then sent from the SETI@home server over the Internet to personal computers around the world to analyze.
Data is merged into a database using SETI@home computers in Berkeley.
The SETI@home distributed computing software runs either as a screensaver or continuously while a user works, making use of processor time that would otherwise be unused.
Active users: 121,780 (January 2015)
PrimeGrid
PrimeGrid is a distributed computing project for searching for prime numbers of world-record size. It makes use of the Berkeley Open Infrastructure for Network Computing (BOINC) platform.
Active users 8,382 (March 2016)
MapReduce
A MapReduce program is composed of a Map() procedure (method) that performs filtering and sorting (such as sorting students by first name into queues, one queue for each name) and a Reduce() method that performs a summary operation (such as counting the number of students in each queue, yielding name frequencies).
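As a tiny, hypothetical Python rendering of exactly that description, the Map step buckets students by first name (one "queue" per name) and the Reduce step counts each bucket (name frequencies); the real system simply runs these same two steps across thousands of machines:

```python
# Miniature MapReduce-shaped example matching the description above.
# Illustration only - real MapReduce distributes both steps across many machines.

from collections import defaultdict

def map_step(students):
    buckets = defaultdict(list)            # one "queue" per first name
    for first_name, last_name in students:
        buckets[first_name].append(last_name)
    return buckets

def reduce_step(buckets):
    return {name: len(queue) for name, queue in buckets.items()}   # name frequencies

students = [("Ada", "Lovelace"), ("Alan", "Turing"), ("Ada", "Yonath"), ("Grace", "Hopper")]
print(reduce_step(map_step(students)))     # {'Ada': 2, 'Alan': 1, 'Grace': 1}
```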
How can we go about developing sharding approaches for Bitcoin?
We have to identify a part of the problem which is in some sense "invariant" or "unchanged" under the operations of DECOMPOSE and RECOMPOSE - and we also have to develop a coordination mechanism which orchestrates the DECOMPOSE and RECOMPOSE operations among the machines.
The simplistic thought experiment above outlined an "instant sharding" approach where we would agree upon a convention where the "send" and "receive" address would have to end in the same character - instantly providing a starting point illustrating some of the mechanics of an actual sharding solution.
BUIP024 involves address sharding and deals with the additional features needed for a complete solution - such as cross-shard transactions.
And distributed trustless Merkle trees would involve storing Merkle trees across a distributed network - which would provide the same guarantees of immutability, while drastically reducing storage requirements.
So how can we apply ideas like MapReduce and BOINC to providing massive on-chain scaling for Bitcoin?
First we have to examine the structure of the problem that we're trying to solve - and we have to try to identify how the problem involves a massive search space which can be decomposed and recomposed.
In the case of Bitcoin, the problem involves:
  • sequentializing (serializing) APPEND operations to a blockchain data structure
  • in such a way as to avoid double-spends
Can we view "preventing Bitcoin double-spends" as a "massive search space problem"?
Yes we can!
Just like Google efficiently searches hundreds of terabytes of web pages for a particular phrase (and SETI@home, Folding@home, PrimeGrid etc. efficiently search massive search spaces for other patterns), in the case of "preventing Bitcoin double-spends", all we're actually doing is searching a massive search space (the blockchain) in order to detect a previous "spend" of the same coin(s).
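Framed that way, here is a hedged sketch of what the lookup might look like in a sharded design (purely hypothetical - no real Bitcoin data structures are modeled here): a verifier routes the "has this output already been spent?" question to the single shard responsible for the spending address, instead of scanning a full local copy of the chain.

```python
# Hypothetical sketch of a sharded double-spend check. Each shard holds the
# spent outputs whose spending address falls in that shard; a verifier queries
# one shard instead of searching the whole blockchain locally.

NUM_SHARDS = 58
BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

class Shard:
    def __init__(self):
        self.spent_outputs = set()         # (txid, output_index) pairs already spent

    def is_spent(self, outpoint):
        return outpoint in self.spent_outputs

    def mark_spent(self, outpoint):
        self.spent_outputs.add(outpoint)

shards = [Shard() for _ in range(NUM_SHARDS)]

def shard_for(address: str) -> Shard:
    return shards[BASE58.index(address[-1])]

def accept_spend(address: str, outpoint) -> bool:
    """Accept the spend only if this outpoint has never been spent before."""
    shard = shard_for(address)
    if shard.is_spent(outpoint):
        return False                       # double-spend attempt detected
    shard.mark_spent(outpoint)
    return True

print(accept_spend("1ExampleAddrQ", ("txid-abc", 0)))   # True  (first spend)
print(accept_spend("1ExampleAddrQ", ("txid-abc", 0)))   # False (double-spend)
```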
So, let's imagine how a possible future sharding-based architecture of Bitcoin might look.
We can observe that, in all cases of successful sharding solutions involving searching massive search spaces, the entire data structure is never stored / searched on a single machine.
Instead, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) create a "virtual" layer or grid across multiple machines - allowing the data structure to be distributed across all of them, and allowing users to search across all of them.
This suggests that requiring everyone to store 80 Gigabytes (and growing) of blockchain on their own individual machine should no longer be a long-term design goal for Bitcoin.
Instead, in a sharding environment, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) should allow everyone to only store a portion of the blockchain on their machine - while also allowing anyone to search the entire blockchain across everyone's machines.
This might involve something like BUIP024's "address sharding" - or it could involve something like distributed trustless Merkle trees.
In either case, it's easy to see that the basic data structures of the system would remain conceptually unaltered - but in the sharding approaches, these structures would be logically distributed across multiple physical devices, in order to provide virtually unlimited scaling while dramatically reducing resource requirements.
This would be the most "conservative" approach to scaling Bitcoin: leaving the data structures of the system conceptually the same - and just spreading them out more, by adding the appropriately defined mathematical DECOMPOSE and RECOMPOSE operators (used in successful sharding approaches), which can be easily proven to preserve the same properties as the original system.
Conclusion
Bitcoin isn't the only project in the world which is permissionless and distributed.
Other projects (the BOINC-based permissionless, decentralized SETI@home, Folding@home, and PrimeGrid - as well as Google's permissioned, centralized MapReduce-based search engine) have already achieved unlimited scaling by providing simple mathematical DECOMPOSE and RECOMPOSE operations (and coordination mechanisms) to break big problems into smaller pieces - without changing the properties of the problems or solutions. This provides massive scaling while dramatically reducing resource requirements - with several projects attracting over 100,000 nodes, much more than Bitcoin's mere 6,000 nodes - without even offering any of Bitcoin's financial incentives.
Although certain "legacy" Bitcoin development teams such as Blockstream / Core have been neglecting sharding-based scaling approaches to massive on-chain scaling (perhaps because their business models are based on misguided off-chain scaling approaches involving radical changes to Bitcoin's current successful network architecture, or even perhaps because their owners such as AXA and PwC don't want a counterparty-free new asset class to succeed and destroy their debt-based fiat wealth), emerging proposals from independent developers suggest that on-chain scaling for Bitcoin will be based on proven sharding architectures such as MapReduce and BOINC - and so we should pay more attention to these innovative, independent developers who are pursuing this important and promising line of research into providing sharding solutions for virtually unlimited on-chain Bitcoin scaling.
submitted by ydtm to btc [link] [comments]

BOINC 7.4.36 released. Suspending GPUs should not suspend Bitcoin Miners, up to 64 coprocessors support and more

submitted by gamer11200 to BOINC [link] [comments]

Gridcoin 5.0.0.0-Mandatory "Fern" Release

https://github.com/gridcoin-community/Gridcoin-Research/releases/tag/5.0.0.0
Finally! After over ten months of development and testing, "Fern" has arrived! This is a whopper: 240 pull requests merged. The complete rewrite that was started with the scraper (the "neural net" rewrite) in "Denise" has now been finished. Practically the ENTIRE Gridcoin-specific codebase resting on top of the vanilla Bitcoin/Peercoin/Blackcoin PoS code has been rewritten. This removes the team requirement at last (see below), although there are many other important improvements besides that.
Fern was a monumental undertaking. We had to encode all of the old rules active for the v10 block protocol in new code and ensure that the new code was 100% compatible. This had to be done in such a way as to clear out all of the old spaghetti and ring-fence it with tightly controlled class implementations. We then wrote an entirely new, simplified ruleset for research rewards and reengineered contracts (which includes beacon management, polls, and voting) using properly classed code. The fundamentals of Gridcoin with this release are now on a very sound and maintainable footing, and the developers believe the codebase as updated here will serve as the fundamental basis for Gridcoin's future roadmap.
We have been testing this for MONTHS on testnet in various stages. The v10 (legacy) compatibility code has been running on testnet continuously as it was developed to ensure compatibility with existing nodes. During the last few months, we have done two private testnet forks and then the full public testnet testing for v11 code (the new protocol which is what Fern implements). The developers have also been running non-staking "sentinel" nodes on mainnet with this code to verify that the consensus rules are problem-free for the legacy compatibility code on the broader mainnet. We believe this amount of testing is going to result in a smooth rollout.
Given the amount of changes in Fern, I am presenting TWO changelogs below. One is high level, which summarizes the most significant changes in the protocol. The second changelog is the detailed one in the usual format, and gives you an inkling of the size of this release.

Highlights

Protocol

Note that the protocol changes will not become active until we cross the hard-fork transition height to v11, which has been set at 2053000. Given current average block spacing, this should happen around October 4, about one month from now.
Note that to get all of the beacons in the network on the new protocol, we are requiring ALL beacons to be validated. A two week (14 day) grace period is provided by the code, starting at the time of the transition height, for people currently holding a beacon to validate the beacon and prevent it from expiring. That means that EVERY CRUNCHER must advertise and validate their beacon AFTER the v11 transition (around Oct 4th) and BEFORE October 18th (or more precisely, 14 days from the actual date of the v11 transition). If you do not advertise and validate your beacon by this time, your beacon will expire and you will stop earning research rewards until you advertise and validate a new beacon. This process has been made much easier by a brand new beacon "wizard" that helps manage beacon advertisements and renewals. Once a beacon has been validated and is a v11 protocol beacon, the normal 180 day expiration rules apply. Note, however, that the 180 day expiration on research rewards has been removed with the Fern update. This means that while your beacon might expire after 180 days, your earned research rewards will be retained and can be claimed by advertising a beacon with the same CPID and going through the validation process again. In other words, you do not lose any earned research rewards if you do not stake a block within 180 days and keep your beacon up-to-date.
The transition height is also when the team requirement will be relaxed for the network.

GUI

Besides the beacon wizard, there are a number of improvements to the GUI, including new UI transaction types (and icons) for staking the superblock, sidestake sends, beacon advertisement, voting, poll creation, and transactions with a message. The main screen has been revamped with a better summary section, and better status icons. Several changes under the hood have improved GUI performance. And finally, the diagnostics have been revamped.

Blockchain

The wallet sync speed has been DRASTICALLY improved. A decent machine with a good network connection should be able to sync the entire mainnet blockchain in less than 4 hours. A fast machine with a really fast network connection and a good SSD can do it in about 2.5 hours. One of our goals was to reduce or eliminate the reliance on snapshots for mainnet, and I think we have accomplished that goal with the new sync speed. We have also streamlined the in-memory structures for the blockchain which shaves some memory use.
There are so many goodies here it is hard to summarize them all.
I would like to thank all of the contributors to this release, but especially thank @cyrossignol, whose incredible contributions formed the backbone of this release. I would also like to pay special thanks to @barton2526, @caraka, and @Quezacoatl1, who tirelessly helped during the testing and polishing phase on testnet with testing and repeated builds for all architectures.
The developers are proud to present this release to the community and we believe this represents the starting point for a true renaissance for Gridcoin!

Summary Changelog

Accrual

Changed

Most significantly, nodes calculate research rewards directly from the magnitudes in EACH superblock between stakes instead of using a two- or three-point average based on a CPID's current magnitude and the magnitude for the CPID when it last staked. For those long-timers in the community, this has been referred to as "Superblock Windows," and was first done in proof-of-concept form by @denravonska.

Removed

Beacons

Added

Changed

Removed

Unaltered

As a reminder:

Superblocks

Added

Changed

Removed

Voting

Added

Changed

Removed

Detailed Changelog

[5.0.0.0] 2020-09-03, mandatory, "Fern"

Added

Changed

Removed

Fixed

submitted by jamescowens to gridcoin [link] [comments]

Why not use “BOINC-like Credit System” instead of Bitcoin?

Why not use “BOINC-like Credit System” instead of Bitcoin? submitted by loukiosvalentine79 to BOINC [link] [comments]

Grid computing problem (including BOINC) and BitCoin

I started using Bitcoin mining software, and that made me think: if I can donate my computing power for money, why not donate it to science? So I installed BOINC and looked through all the projects, and NONE of them had a list of results achieved through the cumulative computing power that those projects harness.
In Bitcoin, at least, you know that you're helping confirm transactions, so you're doing something worthwhile (for the community) AND you can get paid for it if you join a pool.
In BOINC, my fans run at max capacity, louder than anything else, and the computer overheats like crazy - but I don't even have a way of accessing results, only a stupid leaderboard that doesn't mean anything to me.
Am I doing something wrong, or am I missing something? What do you think?
submitted by peredatchik to BOINC [link] [comments]

Bitcoin Utopia: probably the most meaningless, ridiculous BOINC project

Bitcoin Utopia: probably the most meaningless, ridiculous BOINC project submitted by abingor to BOINC [link] [comments]

Bitcoin mentioned around Reddit: BOINC Resources - Anyone got online lessons/tutorials for an Amateur? RE: building a decent rig + getting onboard the boinctrain /r/BOINC

Bitcoin mentioned around Reddit: BOINC Resources - Anyone got online lessons/tutorials for an Amateur? RE: building a decent rig + getting onboard the boinctrain /BOINC submitted by BitcoinAllBot to BitcoinAll [link] [comments]

Bitcoin mentioned around Reddit: Problem installing BOINC client /r/BOINC

Bitcoin mentioned around Reddit: Problem installing BOINC client /BOINC submitted by BitcoinAllBot to BitcoinAll [link] [comments]

just wondering what are some p2p systems out there besides BOINC, torrents and bitcoins?

Is there anything like a P2P domain name? Or P2P voting yet?
submitted by techtakular to technology [link] [comments]

Bitcoin Utopia, Gridcoin and BOINC

As many of you may have noticed, a project that is quite unusual for BOINC - Bitcoin Utopia - was recently added to the Gridcoin project whitelist.
Personally, I am against including this project in the whitelist, for the following reasons:
1) By its nature, BU contradicts the core mission of Gridcoin Research - namely, that as much computing power as possible should be spent for the benefit of science.
2) The BOINC credit statistics are awful. It gives the impression that the GR team is doing nothing but working on BU, even though that is not actually the case.
3) There was no open vote on adding BU to the whitelist.
And finally,
4) Work done on the BU project is paid not only in GRC but also in BTC. It doesn't matter that the BTC never reaches the wallet of the person who did the work. We have an interconnected crypto economy, and in any case this amounts to additional issuance, which ultimately devalues both currencies. And GRC starts from the weaker position, since it is much weaker than BTC.
Even if point 2 is fixed, I will remain opposed to including BU in the whitelist.
submitted by vladare to russiangridcoin [link] [comments]

Notional idea for bitcoin variant using BOINC

It wouldn't stand up on its own right now, but technically, though the calculated proof-of-work hashes provide value in securing the Bitcoin network, the extra difficulty added on to them only helps to regulate the pace of coin creation.
BOINC (http://boinc.berkeley.edu/) implements open-source, volunteer-based distributed computing (like SETI@home or Folding@home)... if we could figure out a way to make clients do a regular low-difficulty hash but ALSO complete some difficulty-determined number of work units via BOINC, it would really accomplish something with all that computing power, and would lower one of the major criticisms of Bitcoin from an environmental/efficiency standpoint...
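A purely speculative sketch of the acceptance rule being suggested (nothing here reflects real Bitcoin consensus code or any real BOINC API; the credit check is a stub): a block would be valid only if it met a low proof-of-work difficulty AND its miner could show a difficulty-determined amount of completed BOINC work.

```python
# Toy sketch of a hybrid "low-difficulty PoW + BOINC work" acceptance rule.
# The BOINC credit lookup is a stub - verifying volunteer-computing work in a
# trustless way is exactly the hard, unsolved part of the idea.

import hashlib

LOW_POW_TARGET = 2 ** 244          # easy hash target (made-up number, illustrative)
REQUIRED_BOINC_CREDIT = 1_000      # made-up threshold standing in for "difficulty"

def meets_low_pow(block_header: bytes) -> bool:
    return int.from_bytes(hashlib.sha256(block_header).digest(), "big") < LOW_POW_TARGET

def verified_boinc_credit(miner_id: str) -> int:
    """Stub: a real design would have to verify signed BOINC work receipts."""
    fake_credit_db = {"miner-1": 1_500, "miner-2": 200}
    return fake_credit_db.get(miner_id, 0)

def block_is_valid(block_header: bytes, miner_id: str) -> bool:
    return meets_low_pow(block_header) and verified_boinc_credit(miner_id) >= REQUIRED_BOINC_CREDIT

if __name__ == "__main__":
    nonce = 0
    while not meets_low_pow(f"example-header-{nonce}".encode()):
        nonce += 1                 # the "regular low difficulty hash" part
    header = f"example-header-{nonce}".encode()
    print("low-difficulty nonce:", nonce)
    print("valid for miner-1:", block_is_valid(header, "miner-1"))
    print("valid for miner-2:", block_is_valid(header, "miner-2"))
```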
Does anyone know enough about BOINC to say how feasible it would be to do at least a proof of concept in a Bitcoin fork?
submitted by childermass to Bitcoin [link] [comments]

just wondering what are some p2p systems out there besides BOINC, torrents and bitcoins?

Is there anything like a P2P domain name? Or P2P voting yet?
submitted by techtakular to AskReddit [link] [comments]

Mining a vaccine for COVID-19 might be the best investment you ever make

Rosetta@home (https://boinc.bakerlab.org/rosetta/) is a volunteer distributed computing project on the BOINC platform that is searching for solutions to the coronavirus problem. It is CPU-based and has a constant flow of work, some of which is COVID-19 related.
Gridcoin rewards people who do work on that project with its native currency (GRC) and community members are Raining additional GRC onto people who are crunching the project. More about BOINC, Rosetta and a setup tutorial here: https://www.youtube.com/watch?v=81KSpW4gRTU&feature=youtu.be
Gridcoin has been around since 2013 and is one of the most actively developed coins, the community is large, dedicated and decentralized, major decisions are made via on-chain voting.
submitted by backward-stash to CryptoCurrency [link] [comments]

Lightweight dreamlab like application for homelab

Hey guys,
I recently came across DreamLab. If you don't know what it is, it's an app you install that uses your phone's processing power while it's charging to help medical research.
I was wondering if anyone knows of anything similar I could stick in a VM on my server. I only want to assign one i5 2500K core to the VM, so which project would benefit the most? I don't think it would be good enough for Folding@home.
Edit: I have my old Android phone running DreamLab plugged into my server, so that's doing something now at least.
submitted by fusrohdann to homelab [link] [comments]

Going over 16 GB of RAM is pointless.... thoughts?

I'm a Mac tech and I often have to decide how much I should upgrade someone's computer. People often think that RAM = SPEED, and thus the more RAM, the more speed. My general rule of thumb is that for regular use, 8 GB is plenty for almost everyone. The idea of more always being better comes from the days when no computer could hold "enough", or at least no one could afford "enough". But these days, you can easily load up a machine.
But some people are power users, some people run a lot of heavy stuff, and for those people, 16 GB can be noticeably better than 8 GB.
For the sake of this discussion, lets completely ignore/exclude servers and virtual machines.
I'm a bit of a power user myself. I've been running 16 GB on my primary computer for about 8 years now. It's great. I've definitely used all of it, occasionally. But I've never felt like I was short on memory.
Recently I upgraded a Mac Pro I also run, from 7 GB to 32 GB. It wasn't my intention to go that high, but it was a crazy deal so I went for it. This Mac is currently running the Bitcoin Core wallet, the Dogecoin Core wallet, the Gridcoin wallet, BOINC with 12 full-time Rosetta work units, and Folding@home. That's a lot of work for one machine. It would regularly choke to death on that when I had 7 GB. Since upgrading, I've kept an eye on it and it has never gone over 16 GB of used memory.
Of course, adding RAM is always a case of diminishing returns the higher you go. But I feel like you would be hard pressed to find reasonable situations (again, excluding servers and running virtual machines), that would really benefit AT ALL from having more than 16 GB of RAM.
Thoughts? Please be specific and bring specs/configs if you got em.
submitted by l008com to mac [link] [comments]

I have some available processing power. What are some cool applications that need some extra horsepower.

The GPU in my laptop died. It's soldered in, so I replaced the motherboard and have a spare i7-6700 lying around. It makes me sick to my stomach to just think about letting that go to waste.
I'm going to mount the motherboard to a piece of aluminum and throw a SSD in it.
I already have a Linux server hosting my NAS, Plex, etc.
I thought it would be a good opportunity to learn about virtualization etc.
I thought about maybe doing some distributed computing in a VM, or maybe hosting a high-speed Nextcloud instance, etc.
If this isn't the right sub let me know.
submitted by That_Baker_Guy to selfhosted [link] [comments]

Summary Golem Factory AMA, January 22nd 2020!

Hi all,
First of all, hope you have all had a great start to the new decade.
Golem did an AMA on the 22nd of January, and there was a lot to discuss, with over 50 questions from all of you. It is understandable that many do not want to read the whole thing, so I will try to recap the most 'important' or relevant questions for the current state of development. As always, I will include a juicy Tl;dr at the end.
General Development Direction and Product Adoption
"We believe that decentralization, in the upcoming years, will not only be needed, but will be inevitable. We’re then preparing for when that time comes, as we are aware that Golem will need to grow robuster and then, the worries of low requestor supplies, will be a thing of the past. Taking into account how dependent we have become from corporations we believe that this trend will have to change and we have to be ready. Nowadays, the adoption is not going as quickly as we expected, and as quickly as we all wished for. Not only for the Golem network but for the whole cryptospace. We believe this is a moment to think progressively and overcome doubts by bulding."
(Viggith) "We're almost about to become Clay officially. Reaching this milestone gave us a lot of opportunities to learn. As the whole process took quite some time, we could observe the development decisions made in other projects, how the tech stack matured, and how expectations in the community changed shape."
"Right now we’re mostly focused on the general platform development rather than working on deep development of integrations. It doesn’t mean that we’re not actively looking for the new ones, we just want to encourage devs to build their own rather then build them interally. However, we have several examples and PoCs that are being integrated - computational chemistry software for one of the scientific research projects from IChO, the transcoding use- case is at its MVP stage. We are also investigating the usage of gWASM for gas price optimization for Ethereum, and we had a PoC for a meta-use case with tools for devops’. We are striving to improve the existing software including Task API, so that the gWASM and Task APIusers will propose new integrations."
Task API Launch and Concent
Last week, the Task API launched on Testnet which allows users to build their tasks on the Golem Network. This has been perceived to be the largest component that will transition Golem from the Brass stage, to Clay. For more information and elaboration on Concent, see this comment
"We worked on the task-api component with a small and agile team, with proper planning and preparation we were able to not have big hiccups. The largest changes where that subtask-id was only unique when combined with task-id. The largest fights with code were about windows exceptions and the issues between twisted and asyncio. Twisted is our old async library, asyncio is the new one that has better native python support.
For the mainnet release we would like to have more use-cases, better developer utilities and a lot of testing, by the team and the community. The main focus is to stabilize the task-api"
For quick examples of the Task API:
"As examples for the task-api we made two apps: `blenderapp` and `tutorialapp`. blenderapp can be run by anyone on the current testnet using these instructions. tutorialapp can be build and run locally using these instructions ( NOTE: technical ). As for tests we made unit tests on almost all levels: the apps, connecting libraries and golem core. In Golem core there are also multiple integration tests to test integration with core, one for testing blenderapp, one for testing apps while developing them."
Other Usecases for Golem on the Horizon
"We did some research on integrating BOINC and BOINC-like computations. For now it seems that it is technically possible. But it will require more effort. Recently we are planning to try to cross-compile [email protected] to gwasm application for the start and run it on mainnet. Another possible way is to use the testnet Task API as you mentioned. In general, it would be better to do so on mainnet but we need to wait for the release.
(...)Golem should be presented to science oriented researchers and be recognized in voluntary computations. That would improve our userbase, it would contribute to non-profit organizations and, of course, would bring dApps to the non blockchain world. (...) I see that there have been more discussions on reddit and we will review them and speak internally."
"Right now we’re mostly focused on the general platform development rather than working on deep development of integrations. It doesn’t mean that we’re not actively looking for the new ones, we just want to encourage devs to build their own rather then build them interally. However, we have several examples and PoCs that are being integrated - computational chemistry software for one of the scientific research projects from IChO, the transcoding use- case is at its MVP stage. We are also investigating the usage of gWASM for gas price optimization for Ethereum, and we had a PoC for a meta-use case with tools for devops’. We are striving to improve the existing software including Task API, so that the gWASM and Task APIusers will propose new integrations."
The GNT, Layer 2 and DeFi
"We crowdfunded for this project, and GNT has always been a utility token. So, in short, the narrative "the price does not matter" would be neither politically nor logically correct. However, we need to look after the best interests of all users, either golem software users or token holders, that helped us kickstart this venture.
(...)So, the GNT should be easy to use directly on the platform. Still, the token should also supplement the platform in other ways (e.g., through community-driven projects on the platform utilizing economic mechanisms envisioned and developed by the community members). The token should also be easy to use in a broader context (e.g., the DeFi), which may or may not result in a direct connection with the Golem platform."
"The current model with on-chain payments is not sustainable for Golem and other similar projects which need a trade-off between the cost of transaction, security/finality and timing. When it comes to small (aka micro) payments it’s even more important. It may happen that due to Ethereum congestion one has to pay more for the gas than the computations itself.
Here comes the idea for moving payments to layer 2 solutions. Unfortunately currently there is no such in the production which fits our platform needs, though the situation is very dynamic and we can expect suches to appear in the coming months."
"It is no secret that we have been thinking about migrating to ERC-20 for a long time. For one reason or another, we always postponed. But with all the 2019 astronomical DeFi growth, the flame was reignited(...).
We've been working with ETHWorks on finding the best approach for migrating GNT to ERC20. We chose to work with this particular company as our goal is to make sure that the passage to ERC20 allows the (new) GNT to be able to adapt to various matters: for instance, to be used for layer 2 scaling solutions, Universal Logins, or gasless transactions, among others. Right now, doing gasless transactions with the current GNT is cumbersome, and there are many solutions in the market that would be a great fit if GNT was ERC20. (...) As we continue the work & research, we may come up with more ideas that go beyond this, but our main focus remains on giving our users the chance to improve their Golem experience and trade without KYC (if they want to) - while we simultaneously look into the whole DeFi ecosystem, and see if we can have the chance of using the token in other platforms."
New Team Members and GolemGrid
"Radek Tereszczuk has joined us in order to work on the long-term vision of the project and how it fits in the overall web 3 vision. He is an inventor, expert and consultant in areas such as IT, telecommunications, statistics, machine learning, genetics and physics. After hours, research on the new class of programming languages ​​based on his own discoveries in graph mathematics. Has 20+ years professional experience in both his own start-ups and big enterprises (mainly banks and insurance), acting as dev / analyst / architect / project and product manager.
Kuba Kucharski is joining us as Chief Product Engineering Officer to boost our product and engineering efforts. He has vast experience in leading developer teams and building product organisations. Involved in Blockchain space since 2013, some of his projects being OrisiOracles (smart contract framework built on top of Bitcoin and BitMessage) and Userfeeds (attention economy / blockchain explorer built on Ethereum)."
Phillip from GolemGrid has officially joined the team as well, after his support mainly on Rocket Chat (chat.golem.network). When asked about his product GolemGrid (golemgrid.com) and working remotely, he said the following:
"So far so good. No issues with working remotely for what i’m currently doing. We have an internal chat for the team members, so if there’s any questions one can just type in there and receive an answer fairly quickly.
No challenges to GolemGrid. Actually all more helpful since i've got the smart developers around to answers questions about Golem if needed. (...)
Currently there has not been any talk of GolemGrid integrating into Golem or something similar. So atm it’s purely separated as it always has been. I myself will always integrate what’s possible with Golem to GolemGrid, so whether that’s ML, Rendering or a third thing, I want to integrate it all when released.
I have plans to fiddle with the Task API in the nearest time and see if I can create something unique and useful for others."
Events and upcoming Promo
"Kubkon’s speaking at FOSSDEM 2020 in Belgium in a matter of days, then he’s heading towards ETHCC 2020 in Paris to spread the word about gWASM even further.The very eloquent Marcin Benke is speaking in April at EDCON.
MP is also doing active reach out to conferences to help with programming and intro some Golem angles we’ve not presented before, and maybe more generalized knowledge that our team can share.We’re adding more conferences every month - and most importantly, we will focus on hackathons. You can rest assured that angle will be thoroughly covered, whether local or more international initiatives, we’ll have a lot of news on this front."
"We are working on our content schedule for 2020 (including regular blogposts as we’ve been doing), planning to add tutorial videos and workshops / hackathons. The planned marketing activities for the first two quarters of 2020 are going to be targeted towards quite technical people and they are going to be heavily tech oriented (tutorials, docs, hackathons, explanatory videos, workshops etc). The promo video was a representation of the more mainstream marketing that forms part of our long-term goal."
Tl;dr
Golem has not gained users as quickly as expected; however, that goes for a lot of things in the crypto space. The focus is currently on making the platform more robust and on UX, rather than on deep development and integrations. The Testnet API is live. Blenderapp can be run by anyone on the current testnet using these instructions. tutorialapp can be built and run locally using these instructions (NOTE: technical). Other possible use-cases for Golem are BOINC and BOINC-like computations; however, these currently require more effort and have been passed on for internal discussion. Several PoCs are being integrated: chemistry software for one of the scientific research projects from IChO, as well as gas-optimization calculations for gWASM, and a PoC for a meta-use-case with tools for developers.
The GNT should be easy to use directly on the platform. The current model with on-chain payments is not sustainable for Golem and other similar projects, which need a trade-off between transaction cost, security/finality and timing. When it comes to small (aka micro) payments it's even more important. Golem has not found a layer 2 solution that satisfies their needs. Golem has been working with ETHWorks on finding the best approach for migrating GNT to ERC20. They chose to work with this particular company as their goal is to make sure that the passage to ERC20 allows the (new) GNT to adapt to various use cases. Their main focus remains on giving their users the chance to improve their Golem experience and trade without KYC (if they want to), while they simultaneously look into the DeFi ecosystem and see whether they can use the token in other platforms.
Radek Tereszczuk, Kuba Kucharski and Phillip from GolemGrid have joined the team; they will help with the long-term web3 vision, with boosting product and engineering efforts, and with tech and community support, respectively.
Golem will be speaking at FOSDEM 2020 in Belgium in a matter of days, and will then head to ETHCC 2020 in Paris to spread the word about gWASM even further. They will also speak in April at EDCON, as well as doing active outreach to conferences to help with programming.

See you all next AMA!
submitted by PSVjasper99 to GolemProject [link] [comments]

Use your PC to help scientists beat cancer and other terrible diseases (and get a custom PCMR flair while at it)!

This thread has had several previous iterations because reddit archives posts after 6 months, so no one else can reply to them. For reference:
Version 1; Version 2; Version 3; Version 4.
AMA with the Folding@home Team

Version 5

My mother passed away in July 2016, 31 days after I was told that she was battling cancer.
Like me, many others have seen family and friends suffer with this plague. It all makes us feel helpless and desperate.
Only with scientific advancement is it possible to fight against cancer. There are little things that everyone can do to help such advancements happen sooner than they otherwise would. Our suggestion is the Folding@home project:

Folding@home

What is it?
Folding@home is a project by Stanford University that uses our computing power to help study the process of protein folding so as to aid research on various diseases, including many forms of cancer, Alzheimer's, Huntington's, and Parkinson's. It is basically a big distributed supercomputer, and you can contribute a node!

What do I have to do to help out?

All you have to do is install a small program on your computer or Android phone; it downloads a small amount of data that it analyses. When finished, it returns the results to the Stanford researchers and collects another task. You can even choose which disease research to devote the bulk of your computing power to, or just let it fold them all!

I don't have quad Xeons and 8 Titan XPs and 14 RX Vegas. Can I really do anything?

Everyone, no matter the hardware they possess, has a chance to help researchers fighting cancer and other illnesses, and, perhaps, make a big difference in the lives of other people. Here it is running on a Pentium 4. While a modern CPU and GPU will be tenfold faster at folding, every little bit can help! It effectively means scientists get faster access to information! Here's an i5-6500/GTX 970
Overclock.net has a good resource for benchmarks of various GPUs.

Joining is easy and takes about 3 minutes!

  1. Visit this link and click where it says DOWNLOAD under step 1.
  2. Install the software; everything is very self-explanatory. (Windows) | (Linux) | (Mac)
  3. That's it!
You can choose any username you want, even if it's already taken, and select to be part of a team to 'compete' against other teams to see who can accomplish the most science. Our team number is 225605, but you do not have to join ours. Regardless of which team you join (or even if you don't join any), you are still helping!
In the initial config screen, you can also have a passkey e-mailed to you by Stanford University. This is a string that you can add when you set up Folding@home, and it will give you extra folding points, under some criteria, if you finish your work units ahead of schedule! For higher-end hardware this is very common. Folding@home benchmarks its tasks on an older first-generation i5. This is your target to beat.
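The passkey bonus is often described as a Quick Return Bonus: returning a work unit faster than its timeout multiplies its base points. Here is a rough, non-authoritative sketch of that idea in Python; the k factor, timeout and eligibility criteria are set per project, so every number below is a placeholder rather than an official value.

```python
import math

def estimated_points(base_points: float, k_factor: float,
                     timeout_days: float, elapsed_days: float) -> float:
    """Rough sketch of the Quick Return Bonus idea: returning a work unit
    faster than its timeout multiplies the base points by a square-root bonus.
    All constants are placeholders, not official project values."""
    bonus = math.sqrt(k_factor * timeout_days / elapsed_days)
    return base_points * max(1.0, bonus)

# Hypothetical work unit: 5,000 base points, k = 0.75, 10-day timeout.
print(estimated_points(5000, 0.75, 10, 0.5))  # fast GPU: ~19,000 points
print(estimated_points(5000, 0.75, 10, 5.0))  # slower machine: ~6,100 points
```

Either way, the same work unit contributes the same science; the bonus simply reflects how valuable a fast turnaround is to the researchers.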

But doesn't this make my computer run very hot?

It is designed to run your system as hard as it can to get the fastest returns on results, so yes, it can make your computer run hot. However, you have control over when it runs and how hard you want it to run. Your options are Light, Medium or Full.
You can also control whether or not you want to Fold while you're doing other things.
Many people also find that running it at Medium or even Full makes no noticeable difference to performance if you're only browsing the internet.
If those presets aren't enough and you want finer control, there are further options.
If, instead, you are going for MAX/FULL power folding, then know that Folding@home is designed to max out your parts even more than video games or benchmarks! This means your temperatures will be higher than normal, so you will want to check what they reach and/or tweak your CPU/GPU fan curves so they run at a higher speed. Your CPU and GPU are designed for this.
As always, you're the one who knows best what you're trying to achieve, but know that having this software start up at boot and run on Light at all times will usually have little effect on performance/temperatures.
Consider running a monitoring program alongside it when folding at a constant FULL level. Recommendations include OCCT, MSI Afterburner, or other similar tools.

TEAM PCMR

Team "Official PCMR" (225605) has the potential to be one of the top 15 teams in the world. Here are our glorious folders! Right now we are ranked 22nd(!!!!!!) in the world and rising quickly, and there is room for you!
Even if you don't want to join us, or have another team in mind, the important thing is that you join Folding@home!
If you have any questions, ask them here! You can also refer to the previous versions of this thread, linked at the beginning of this thread.
Let's fold!
submitted by pedro19 to pcmasterrace [link] [comments]

Cryptocurrency, Is It Worth It?

I've been in the cryptocurrency industry for 4 years now and have thought about it a lot.
I don't mean this in any negative way; I'm just going off what I see. I have reason to believe that the popular cryptocurrencies by market cap do not favor innovation when it comes to creating a new form of money. How is that so?
To figure this out we first have to look at fiat currency, the currency with the greatest market share today. Because the technology of cryptocurrency did not exist more than 10 years ago, we had to rely on the fiat framework for money. Changing the fundamental expression of value over to crypto has been very difficult, in part because we tend to entrust leadership to those with lots of fiat currency. If our current leadership is optimized for fiat, wouldn't spending by the rich be directed at buying cryptocurrencies that protect the value of fiat?
It's hard to believe that Bitcoin, Ethereum, Ripple, etc. could be doing the opposite of what they claim to be doing. But bear with me. It's well known that Bitcoin and Ethereum are hoarders of electricity. The objective of those industries is to control the electricity supply, whether it is renewable or not. And why might that be? I presume it's to stop any cryptocurrency from taking over fiat. At least for now.
The societal transition from fiat to cryptocurrency has inertia. That means the resources expended by fiat to buy cryptos like Bitcoin to protect itself are limited. At some point the electricity that is being wasted, or at least over-used, to check and process transactions will not be put toward creating new jobs, which is what the next generation of people will desperately need.
So if the vast majority of crypto isn't in favor of innovating, what is? Well, for one, I'm a big BOINC fan. BOINC is a platform for distributed science that has existed since 2002. They work on complex tasks like protein folding or challenging mathematical conjectures, and break them down into work units that you can help solve with your own computer hardware. Doing the math, if we took the electricity consumption of Bitcoin at this moment and distributed it over the wattage of the best-performing GPUs, we would have a supercomputer 40 years ahead of its time. The possibilities are endless here.
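For anyone who wants to sanity-check the scale of that claim, here is a back-of-envelope sketch. Every input is an assumption picked for illustration (Bitcoin's annualized consumption and per-GPU power and throughput vary a lot), so treat the result as an order-of-magnitude estimate, not a measurement.

```python
# Back-of-envelope: redirect Bitcoin's electricity into GPU compute.
# All inputs are rough assumptions chosen for illustration only.
btc_energy_twh_per_year = 70.0   # assumed annualized network consumption
gpu_board_power_w = 300.0        # assumed draw of a high-end GPU
gpu_fp32_tflops = 30.0           # assumed FP32 throughput per GPU

avg_power_w = btc_energy_twh_per_year * 1e12 / (365 * 24)  # TWh/year -> watts
gpu_count = avg_power_w / gpu_board_power_w
total_tflops = gpu_count * gpu_fp32_tflops

print(f"average draw:      {avg_power_w / 1e9:.1f} GW")
print(f"equivalent GPUs:   {gpu_count / 1e6:.1f} million")
print(f"aggregate compute: {total_tflops / 1e6:.0f} exaFLOPS (FP32)")
```

With those assumed numbers you get on the order of hundreds of FP32 exaFLOPS, far above the fastest supercomputers of the day, which is the spirit of the comparison even if the exact "years ahead" figure is debatable.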
Okay, so the crypto part here is that you can in fact earn cryptocurrency with BOINC through Gridcoin, a currency that has existed since 2015. If you look at the Gridcoin price it's not faring well compared to Bitcoin, but hold on a second. It doesn't have to be Gridcoin; it could be any currency that does this. It makes sense to reward distributed science, and that idea can't be killed off.
You see, if a cryptocurrency like Gridcoin were to take off, it would be a threat to fiat; in particular to the fiat-rich who can buy the most Bitcoin right now. All the news you've ever watched, and all the products and cryptos advertised to you at this point, are more than likely to have a fiat agenda that isn't in your favor unless you're protecting your fiat investments.
Now, as I've stated above, there is inertia in the transition from fiat to crypto. Some of you reading this far (kudos to you for being open-minded enough to hear another opinion) might be very against what I'm stating. Just know that I know where you're coming from. We live in a world where fiat currency still has the highest market share, and where you've been trained your entire life to protect it, even creating and investing in cryptocurrencies that do that for you without knowing it.
submitted by burner557799 to altcoins [link] [comments]

Circulation is missing

I've been mining for a while and have acquired over 35k GRC off the exchanges. So far GRC seems like a solid idea. However, circulation is lacking. Money must be exchanged for goods and services for it to be viable. The system is set up to pay users GRC in exchange for BOINC contributions, but the chain breaks after that. The only way to spend GRC is to trade it on an exchange or barter.
Has anyone talked to a print-on-demand company or a BOINC project about creating merchandise that can be purchased using the current value of GRC? If the shirts, mugs and hoodies can be purchased with USD, EUR, CAD and GRC, then money earned can be exchanged for goods while also acting as a fundraiser for the projects.
Print on demand is low risk, low margin, but it could be easy to implement.
In essence it would be like a rewards program, but the currency would start flowing until it can grow into its own ecosystem. Since the print-on-demand service would likely need to dump the GRC to recover their expenses, this may also require that GRC set a fixed/pegged exchange rate to USD/EUR/CAD, and of course Bitcoin, on the exchanges until it gets more established.
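As a minimal sketch of the pricing step such a shop would need, assuming a hypothetical GRC/USD rate (whether read from an exchange or fixed by the kind of peg suggested above); the function name, margin and numbers are all illustrative, not a proposal for specific values.

```python
def price_in_grc(price_usd: float, grc_usd_rate: float, margin: float = 0.05) -> float:
    """Convert a fiat sticker price into GRC, padded by a small margin so the
    shop can absorb rate movement before converting the GRC back to fiat."""
    if grc_usd_rate <= 0:
        raise ValueError("exchange rate must be positive")
    return round(price_usd * (1 + margin) / grc_usd_rate, 2)

# Hypothetical numbers: a $20 shirt with GRC trading (or pegged) at $0.01.
print(price_in_grc(20.00, 0.01))  # -> 2100.0 GRC
```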
Until money flows, it's just a scoreboard.
submitted by ShamedGod to gridcoin [link] [comments]

Related videos:
- Cryptocoin + Boinc = GridCoin Mining Rig 6x Radeon 5870
- Bitcoin mining + Science (Boinc) = GRIDCOIN Cryptocurrency
- BOINC and Blockchain
- Antminers, Bitcoin Utopia via Boinc and Cgminer Bitcoin Mining
- How to use Gridcoin and BOINC

BOINC credits are granted as cobblestones for a certain amount of BOINC work. A cobblestone is measured as a 1/200-day fraction of CPU time on a reference computer that does 1,000 MFLOPS based on the Whetstone benchmark, a floating-point operations benchmark. Since not only CPU flops or computing operations can be measured, it is likely that network bandwidth and storage space ...
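Put as a formula, one cobblestone corresponds to 1/200 of a day of CPU time on that 1,000-MFLOPS reference machine. Below is a minimal sketch of that idealized definition; real projects grant credit through their own validators and adjustments, so treat this as the textbook definition rather than what any given project actually pays out.

```python
REFERENCE_MFLOPS = 1_000.0            # Whetstone rating of the reference machine
SECONDS_PER_DAY = 86_400.0
COBBLESTONES_PER_REFERENCE_DAY = 200.0

def cobblestones(cpu_seconds: float, whetstone_mflops: float) -> float:
    """Idealized BOINC credit: scale CPU time by how the host's Whetstone
    rating compares to the 1,000-MFLOPS reference machine."""
    reference_days = (cpu_seconds / SECONDS_PER_DAY) * (whetstone_mflops / REFERENCE_MFLOPS)
    return reference_days * COBBLESTONES_PER_REFERENCE_DAY

# One hour of CPU time on a host that benchmarks at 4,000 MFLOPS (hypothetical):
print(cobblestones(3600, 4000))  # -> ~33.3 cobblestones
```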
