AB DAO official Twitter account upgraded; community reminded to watch for risks
The AB DAO official Twitter account upgrade is complete. The new account is: https://x.com/ABDAO_Global

Keywords: cloud storage track analysis, Filecoin business model analysis, Arweave business model analysis, Irys project introduction, valuation forecast, Irys strategic analysis
Main text:
In distant Mesopotamia, humans first conceived of storing information: our ancestors cleverly engraved it on "enduring" stone tablets; during the Industrial Revolution, music, as a form of information, was pressed onto records; and in the computer age, people invented a succession of storage hardware such as magnetic tape, hard drives, and optical discs...
The way data is stored has always been a silhouette of the progress of its era. In 1956, IBM launched the Model 350, a machine the size of two refrigerators and weighing nearly a ton, yet capable of storing only 5 MB of data; a crane was needed to lift it into the computer room. Despite its bulk, it made electronic storage a resource that businesses could pay for, for the first time. This breakthrough changed the fate of information: it no longer depended solely on fragile paper but could persist on durable electromagnetic media.

In the decades that followed, hard drive manufacturers waged an invisible war. Companies like Seagate, Western Digital, and Hitachi kept raising storage density, packing ever more magnetic grains onto each square inch of platter. Each technological iteration doubled capacity and cut prices. By the 1990s, the spread of personal computers and the rise of the internet had made these hard drive manufacturers the cornerstone of the industry. In those days, storage was essentially a raw material, and the market had a single criterion: who could offer the most efficient storage, that is, the best and the cheapest.

However, as data volumes began to grow exponentially, enterprises' primary need shifted to stability and security. Banks, airlines, and manufacturers all ran on data, and even a small misstep could mean significant losses. Enterprise storage vendors like EMC and NetApp emerged, selling complete storage arrays with supporting software. The storage market shifted from one-time sales to long-term partnerships, with enterprise clients signing extended service and warranty contracts with their providers. For the first time, storage was treated as a business asset. By the early 21st century, the internet and mobile boom had set data flowing across borders.
Traditional enterprise storage proved cumbersome and expensive in the face of global demand. In 2006, Amazon launched the S3 service, abstracting storage into a simple API: developers no longer needed to buy data centers and disks; with a few lines of code, they could write files to the cloud at any time. This pay-as-you-go, on-demand model reshaped developer habits and, for the first time, gave startups the same infrastructure as large companies. The value of cloud storage lay not in cheapness but in elasticity and ecosystem: it turned storage from a device into an always-on service. Dropbox and Google Drive soon brought this experience to consumers. Users no longer worried about which computer a file lived on; with an internet connection, they could move seamlessly between phone, tablet, and laptop. The concept of storage was transformed once again: data lived not on physical devices but in humanity's shared "cyberspace."

From IBM's magnetic drums to EMC's storage arrays to AWS S3 object storage, the evolution of data storage has repeatedly demonstrated a pattern: each new leader emerged by creating, or rather satisfying, a new way of using data. The first generation of hard drives solved capacity; enterprise storage met the need for stability and security; cloud storage addressed flexibility and scalability. Beneath all this history, however, one constant remains: the excessive concentration of data ownership in the hands of vendors. In today's world of data assetization, that is clearly unacceptable. Thus, Web3 entered the picture.

Chapter 2: Filecoin's Miner Logic and Arweave's Idealism

In the Web2 system, data ownership and control are highly centralized. Whether it is Facebook's social graph or Amazon's transaction data, it is ultimately controlled by the company. Users "use" data but never truly "own" it.
Companies freely exploit data for profit, while users are left powerless: when a personal account is banned, their data disappears; when companies remove content under compliance or political pressure, that information vanishes from public space. Hence the calls for decentralized storage. In 2015, the IPFS project proposed a new approach: locating files by their content hash. Any node that stores a file can respond to requests for it, eliminating single-point storage risk. But people soon realized that technology alone wasn't enough: without economic incentives, nodes won't store data for the long term. Thus Filecoin emerged. It builds on IPFS and adds tokenomics: miners provide storage space in exchange for $FIL, and the protocol uses a Proof-of-Spacetime scheme to verify that the data is actually being stored. The design's premise is to "make storage an open market," and this works well on the supply side: as long as token rewards flow, miners show up in large numbers. But a market consists of demand as well as supply. Filecoin's incentives chiefly reward providing capacity and submitting proofs on time, so miners naturally care more about the rewards than about serving users, and a large number of free riders emerged. The result is a structural mismatch: extremely active supply, lagging demand. This mismatch quickly propagates to the product layer. A team that needs stable read/write performance will ask three questions when evaluating Filecoin: what preparation is required before writing, what is the uncertainty range for retrieval latency, and who is accountable when something goes wrong.
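The content-addressing idea behind IPFS can be illustrated in a few lines of Python: a file's address is derived from a hash of its bytes, so any node holding identical bytes can serve the request. This is a deliberately simplified sketch; real IPFS wraps the hash in a multihash-encoded CID and chunks large files into a DAG.

```python
import hashlib

def content_address(data: bytes) -> str:
    # Simplified content address: hex SHA-256 of the raw bytes.
    # Real IPFS uses multihash/CID encoding and chunking for large files.
    return hashlib.sha256(data).hexdigest()

store = {}  # node-local blob store, keyed by content address

def put(data: bytes) -> str:
    addr = content_address(data)
    store[addr] = data  # idempotent: same bytes always yield the same address
    return addr

def get(addr: str) -> bytes:
    return store[addr]

addr = put(b"hello web3 storage")
assert get(addr) == b"hello web3 storage"
# Identical content maps to an identical address, so any replica
# holding these bytes can answer the lookup.
assert put(b"hello web3 storage") == addr
```

Because the address is derived from content rather than location, "where the file lives" stops mattering, which is exactly the single-point-of-storage fix described above.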
On the write side, real business data is constantly updated, while Filecoin's semantics naturally favor fixed-length, periodic, renewable cold storage, forcing developers to build extra indexes, version mappings, and renewal policies. On the retrieval side, another problem appears: if you build your own CDN and caches, the marginal benefit of using Filecoin shrinks sharply; if you rely on third-party gateways or service providers, the relationship becomes semi-centralized, and decision-makers will ask why not simply use the cloud. Finally, there is the boundary of responsibility: on-chain proofs cannot directly guarantee product experience. For enterprise clients, even 1% uncertainty is enough to exclude Filecoin from critical paths. The path dependence created by the incentive design also shows up on the payer side. In an ideal open market, the payers should be users. But when real demand is thin in the early stages, the ecosystem must stimulate demand with incentives (for example, preferential terms for uploading certain datasets to the chain). This boosts volume in the short term, but makes it hard to prove that spontaneous, ongoing willingness to pay actually exists. Over time, the supply side's financial model revolves around block subsidies, staking, and penalties, while demand-side willingness to pay fluctuates with subsidy availability and quotas; the two never couple. That is why so many "success stories" feature news of big datasets going on-chain, but rarely the closed loop of high-frequency retrieval, continuous reuse, and profitable upper-layer products. Almost simultaneously, Arweave proposed another answer: users pay a one-time storage fee, and the network promises long-term preservation.
Founder Sam Williams drew inspiration from history and sociology: if the past can be erased, social memory becomes unreliable; once records are deleted or altered, social trust erodes. Arweave's appeal is that a one-time payment buys future storage, with the network continuously replicating and preserving the data over the long run. But placed in a product and business context, another set of problems emerges. The first is the tension between permanence and iteration. Most applications are not written once and left alone; they are constantly revised, rolled back, and A/B tested. The correct way to use Arweave is to write each change as new content and index the latest version. That is technically feasible and not complex, but it pushes the burden onto application-layer design: users just want the latest version, not an immutable time chain to reason about. The second is the ethical problem raised by permanent storage. An open network inevitably attracts shady and illegal content. The Arweave protocol cannot delete it and relies on self-regulation and filtering at the gateway, front-end, and indexing layers. This leaves developers with a dilemma over responsibility: filter proactively and you become the responsible party; don't, and you risk losing customers. The third is the idealism of the economic model. Arweave's promise rests on two long-term assumptions: that the unit cost of storage keeps declining, and that the network sustains replication for long enough. These bets may well pay off at the macro level, but for an individual product manager the immediate cash-flow pressure is hard to swallow: permanence implies a large, one-time write fee, and the capital cost alone can be prohibitive.
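The first of those two assumptions is what makes "pay once, store forever" arithmetically possible: if annual storage cost declines at a constant rate, the infinite stream of future costs sums to a finite geometric series. The sketch below is my own illustration with made-up numbers, not Arweave's actual endowment parameters or pricing formula.

```python
def perpetual_storage_cost(annual_cost: float, decline_rate: float) -> float:
    """Total cost of storing data forever, assuming the annual cost falls
    by `decline_rate` each year. Geometric series:
        sum_{t>=0} c * (1 - d)^t  =  c / d,   for 0 < d < 1.
    """
    if not 0 < decline_rate < 1:
        raise ValueError("decline_rate must be in (0, 1)")
    return annual_cost / decline_rate

# Illustrative numbers only: suppose storing 1 GB costs $0.02 this year
# and storage costs fall 30% per year.
endowment = perpetual_storage_cost(0.02, 0.30)
print(round(endowment, 4))  # ≈ 0.0667: a finite one-time fee covers forever
```

The fragility is also visible in the formula: as the assumed decline rate `d` approaches zero, the required one-time fee `c / d` blows up, which is why the model leans so heavily on storage costs continuing to fall.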
Over time, Arweave's business has been confined to a very small niche, and its valuation has been unable to break out. After Filecoin and Arweave opened the door to Web3 storage, the market sat largely unchanged for a long stretch. It was in this window that Irys emerged. Its core question: why can't data activate itself? The moment a piece of data is written is essentially an event, so why shouldn't that event immediately trigger logic? If the network itself can serve as the execution environment, data is no longer a dormant file but a unit that can drive applications. This is the design foundation of Irys. Rather than following Filecoin's mining logic or Arweave's permanent storage, Irys integrates storage and computation into what it calls a programmable data chain: a write can carry logic that runs directly in the Irys execution environment (IrysVM). For developers, a two-step process becomes one step: write and invoke. As noted earlier, every evolution in storage over the past half century has created new demand, and I believe Irys's foresight matters especially in the AI era. AI models need vast amounts of data, trusted provenance, and verifiable execution. Traditional storage locks data in cold archives and hands it off-chain for processing, which is both cumbersome and hard to trust. Irys envisions self-driven data: data that automatically "feeds" models, carries its own billing and permission rules, and supports cross-organizational collaboration without third-party custody. Its strength lies in folding storage, execution, and verification into the same base protocol, so data written by different protocols can be read and reused directly, even driving more complex application logic.
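The "write is an event that triggers logic" idea can be sketched as a toy event-driven store. To be clear, this is a purely hypothetical Python model of the concept; the actual IrysVM interface is not published here and may look nothing like this.

```python
from typing import Callable, Dict, List

class ProgrammableStore:
    """Toy model of a programmable data layer: writing a record
    immediately fires any logic registered for its tag."""

    def __init__(self) -> None:
        self.records: List[dict] = []
        self.handlers: Dict[str, List[Callable[[dict], None]]] = {}

    def on_write(self, tag: str, handler: Callable[[dict], None]) -> None:
        # Register logic to run whenever data with `tag` is written.
        self.handlers.setdefault(tag, []).append(handler)

    def write(self, tag: str, payload: dict) -> int:
        # Storage and execution happen in one step: persist, then invoke.
        record = {"id": len(self.records), "tag": tag, "payload": payload}
        self.records.append(record)
        for handler in self.handlers.get(tag, []):
            handler(record)
        return record["id"]

store = ProgrammableStore()
rewards: List[int] = []
# Hypothetical use case: credit a reward every time health data lands.
store.on_write("health", lambda rec: rewards.append(rec["id"]))
store.write("health", {"steps": 12000})
assert rewards == [0]  # the write itself triggered the logic
```

Contrast this with the Filecoin/Arweave pattern above, where the write is inert and any downstream logic needs a separate indexer or off-chain worker to notice that new data arrived.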
As nodes multiply, the network's overall value grows naturally, because data discoverability and composability keep increasing. Ethereum is a useful analogy. When it introduced smart contracts, many people could not see how they differed from ordinary on-chain transactions. Only with the emergence of financial applications like Uniswap, Aave, and Compound did people realize that smart contracts were the seeds of endless narratives. Irys is attempting something similar, with the target shifted from finance to data. Data is more abstract and less tangible than money, but once the ecosystem matures, developers will realize: "I can build directly on other people's data output, without relying on external oracles or collecting the data all over again." This narrative closely resembles the path AWS took. AWS did not win on cheap storage alone; its full suite of SDKs, consoles, and APIs locked developers into its ecosystem. Use one or two AWS services and you are quickly drawn in by the convenience of the whole. If Irys executes its ecosystem play correctly, for example by ensuring that certain high-quality data is accessible only when written to Irys, it can create a similar value lock-in. Data on Irys would then be more than an asset of one protocol; it would fuel the entire ecosystem, and that positive cycle would feed back into the data network itself and the value of its token.

Chapter 4: Irys's Valuation and the Market

Remember: ideals are beautiful, but reality is often harsh, and a forward-looking project is no guarantee of success. Irys's first challenge is the cold start. Without real demand, that is, enough applications willing to consume this "programmable data", it degenerates into just another cheap storage option. The second challenge is compatibility.
Developers are already deeply reliant on interfaces like the EVM, IPFS, and AWS, and any new paradigm raises the learning curve. If Irys wants traction, adoption must be smooth and close to zero-friction. The third challenge is governance. Once data can trigger logic, new attack surfaces appear: fake data for insurance fraud, malicious triggering that burns resources, copyright and privacy disputes. Centralized clouds answer these with law and permissions; decentralized protocols must answer with mechanisms and governance, or institutional adoption will stay out of reach. So whether Irys truly works will only be known after its mainnet launch. Let's see whether, like AWS in its day, it can build abstractions elegant enough, and run prototypes well enough, that developers willingly replace their existing patchwork solutions. Historically, that is what decides whether any infrastructure can dethrone the incumbents and become the next-generation leader. If I were you, I would watch the following three paths:

1. The first application scenario. Every infrastructure in history has needed iconic use cases to prove its value: S3 had Flickr and Dropbox; Snowflake had real-time analytics in finance and retail. Irys likewise needs one or two killer use cases, such as a real-time incentive system for health data or an automatic settlement mechanism for DePIN devices.

2. Lower the migration barrier. Developer habits are the hardest thing to change. Why did the EVM become the de facto standard? Because it let people reuse old tools and languages in a new environment. Irys must avoid "re-educating the market" and instead maximize compatibility with existing habits across its interfaces, SDKs, and developer experience.

3. Establish governance tools and ecosystem rules.
Once data can trigger logic, attacks and disputes will inevitably follow: fake data to claim rewards, malicious triggering to burn resources, gray areas of copyright ownership. If Irys can provide mechanism-level tools to verify data provenance, limit malicious triggering, and embed copyright and privacy logic, it can win trust in both B2B and B2G scenarios. The intensity of competition in this sector should not be underestimated: the cloud giants remain behemoths, assembled solutions stay flexible and cheap, and off-chain proof models are even more cost-effective. But history has shown again and again that real breakthroughs do not come from fighting on the old terrain; the landscape is reshaped when new habits form and become standards. That is the core question Irys must answer to become a leading player.
On valuation: as of this writing, $FIL's circulating market capitalization is $2 billion with an FDV of $4.7 billion; $AR's circulating market capitalization is $400 million, with supply almost fully circulating. Succinct, the ZK proving infrastructure that launched on Binance around the same time, has a $PROVE circulating market capitalization of $200 million and an FDV of $1.1 billion. Irys carries the dual narratives of AI and cloud storage; the AI narrative currently has momentum, but given the enormous macro uncertainty the market is unlikely to grant much of a premium. My post-TGE valuation scenarios for Irys:

1. Weak open: $300-500 million FDV;
2. Normal: $800 million to $1.2 billion FDV.

Given my low risk appetite:

1. If the business progresses well and can form a flywheel with its tokenomics, I will buy immediately below $300 million FDV; around $500 million I will take only a small position; above $500 million I will wait and see.
2. If the business progresses poorly, or the tokenomics fail to mesh with the business, I will wait and see, shifting my focus from fundamentals to technical analysis.