FAQ

What our users frequently ask.

AlgoNode FAQ

Can our project just use the free endpoints?

Are the endpoints really free?

Sure they are - no strings attached. If you are happy with the limit of 50 requests per source IP address (roughly per browser), just go ahead and use them.

Do we need to contact AlgoNode before we go live?

Nope. While not strictly needed, it pays off to give us a heads-up. We’ll gladly help you (free of charge) make a successful transition to the mainnet. We can monitor the public dashboard for you during the launch so that you can focus on your own backend. If you are a huge success and our limits negatively impact the experience, we can issue you temporary tokens that boost the limits tenfold.

What kind of projects use the free endpoints?

We have users that are:

  • NFT teams
  • DeFi
  • GameFi
  • Asset/NFT management
  • R&D individuals

AlgoNode helps projects of all sizes - from 0 total users to thousands of daily active users.

There are paid options - do we need them?

Projects of all sizes use our free endpoints. The primary goal of AlgoNode is to help teams run their own API infrastructure.
Until that happens, teams decide on the paid packages when:

  • they need a guarantee (SLA) and a real phone number to call in case something breaks
  • they would like to hide the API call logs from the public dashboard
  • they need dedicated resources that have known performance characteristics
  • they need endpoints located physically next to their back-end

But mostly because AlgoNode has helped them become profitable and now they can afford it 🎉

Nothing is free really…

Free endpoints exist for the following main reasons:

  • The AlgoNode team has been together since 2006 but is very new to the blockchain world. We want to learn fast and help the community in the process.
    We’ve decided to focus exclusively on Algorand, but we are not just waiting for it to become number one - we help dev teams to make sure it happens fast, in 50 ms or less ;)
  • We need a place to test our crazy ideas (API patches, infra config, DB backends)

We plan for a return on this investment - without changing the rules.


Custom endpoints?

Are there endpoints with extra functionality?

There are, but only for commercial customers. We also charge them an arm and a leg so they think twice before ordering. Our team hates vendor lock-in, and one way to prevent it is to focus on the vanilla API. Our API might be faster and less resource intensive, but all the patches and designs are made public. Web 3.0 is about decentralization - dApps should be able to switch between API providers and private nodes. Custom endpoints break that and hinder decentralization.

But we need custom endpoints

Custom endpoints are 128 USDC/mo each. Still interested? Drop us an email.

Any examples of custom endpoints?
  • Endpoints that are deprecated elsewhere
  • Streaming endpoints to avoid polling (new blocks, filtered TXNs)
  • Analytical endpoints
  • FTLBlock™ endpoints (for arbitrage bots)

Who are your investors?

Who is funding all this?

We have no investors. The team pays all the bills.

Are you going to disappear soon?

Our current infrastructure is secured under long-term contracts, so there is no danger here. AlgoNode received a developer award from the Algorand Foundation for our contributions, but the business model does not depend on grants. Any extra funding we might get will just speed up the deployment of our crazy ideas and result in more open-source tools.

Is this a one-man show?

Only one of us supports the free endpoints on Discord, so it might look that way. Check out the About Us page.


Algorand FAQ

Relay/Archive/Catchup/Participation node

What is a Catchup node?

Algorand allows everyone to run a node that is in sync with the blockchain and has full account state data, but not full history - just the most recent 1000 (or 320) blocks. When you run a node with the default config, it will start building the account state from block ZERO and then delete the history, keeping only the recent blocks. There is an operation called catchup that tells the node to download a recent snapshot of the accounts instead. This lets you skip the 4-week full sync :) To do a fast catchup on a mainnet node, just issue this command:

goal node catchup $(curl -s https://algorand-catchpoints.s3.us-east-2.amazonaws.com/channel/mainnet/latest.catchpoint)

The snapshot is just a hash of the state at a particular block; the actual data will come from a random relay node.

The catchup node is great for getting the latest block data and posting new transactions to the network. Access to the full block history requires an archival node. Searching by transaction or using advanced filters requires running an indexer.

Catchup nodes are very light on CPU and need only ~12 GB of disk space (as of Apr 2022).
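As a minimal sketch, a full fast-catchup session might look like this (it assumes algod is installed and running, and that the goal CLI is on your PATH):

```shell
# Fetch the latest mainnet catchpoint label published by the Foundation
CATCHPOINT=$(curl -s https://algorand-catchpoints.s3.us-east-2.amazonaws.com/channel/mainnet/latest.catchpoint)

# Tell the local node to catch up to that snapshot
goal node catchup "$CATCHPOINT"

# Watch progress: while catching up, status shows "Catchpoint: ..." lines;
# once they disappear and Sync Time reaches 0.0s, the node is caught up
goal node status
```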

What is a participation node?

So the catchup node does not participate in the voting process. For that, one needs to generate participation keys for an account that holds some amount of Algo. Once the keys are registered online on a catchup node, it becomes a participation node and is given a chance to vote proportionally to the amount of Algo in the account. Participation keys need to be renewed after some time, so this is not maintenance-free.

A participation node can even run on a Raspberry Pi 4 - though it is not certain this will still hold after the 10k TXN/s upgrade.
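A hedged sketch of that flow with the standard goal CLI - the account address and round numbers below are placeholders, so substitute your own:

```shell
# Placeholder account address - replace with your own
ADDR="YOUR_ACCOUNT_ADDRESS"

# Generate participation keys valid for a chosen round range
goal account addpartkey -a "$ADDR" --roundFirstValid=20000000 --roundLastValid=23000000

# Register the account online so it starts being selected to vote
goal account changeonlinestatus --address="$ADDR" --online=true

# The keys stop being valid at roundLastValid - remember to renew before then
```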

Here are some links on the subject:

You can monitor your participation with https://app.metrika.co/

What is an Archival node?

When you set Archival: true in config.json you get an archival node (after 2-4 weeks of syncing). This mode simply does not delete old blocks. You cannot fast-track the process with a “full catchup” - no such thing exists. If you try doing a catchup on an archival node, it will skip downloading the full history.
The syncing process is VERY I/O intensive. A fast SSD or NVMe disk is required. But even with an SSD, the node might never sync if:

  • your SSD is connected via USB instead of SATA or PCI
  • your SSD/NVMe has no heat sink - it overheats and slows down
  • you are running on a virtual machine that adds to an I/O latency (KVM, VM on a NAS, Cloud without accelerated I/O)
  • you are running on a “cloud volume” that has I/O limits or high latency
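The config change itself is tiny. A sketch, assuming a default Linux install with the data directory at /var/lib/algorand (adjust the path to your setup, and merge with any existing config.json rather than overwriting it):

```shell
# Minimal config.json enabling archival mode
echo '{ "Archival": true }' | sudo tee /var/lib/algorand/config.json

# Restart algod so the setting takes effect
sudo systemctl restart algorand
```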

Run iostat -x 1 and iostat -x 30 to check whether your disk is at >80% utilization and slowing down the sync. Also, ioping /dev/yourdevice should report 50 to 200 microseconds (0.05 to 0.2 milliseconds) for the full sync to work.

The full archive also takes about 1 TB of disk space. See this handy site for real-time space requirements.

You can interrupt and resume the sync process at any time - no worries.

You can confirm that the node is synced by running goal node status | grep Sync. Your node is synced if the time says 0.0s.

An archival algod node provides only the Node API, not the Indexer API. If you need the Indexer API you need BOTH - an archival node and an indexer with a PostgreSQL server running close to one another (even on the same machine).

But I cannot wait 4 weeks, this is an emergency!

In that case you can download an untrusted snapshot from our archive.
Just read the readme file and install the PIXZ utility first.
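The exact file names and URLs live in the readme, but the restore generally has this shape - the download URL and archive name below are placeholders, not the real ones:

```shell
# pixz does parallel xz (de)compression - needed to unpack the snapshot
sudo apt install pixz

# Placeholder URL - take the real one from the archive's readme
curl -O https://example.com/mainnet-snapshot.tar.pxz

# Stop the node, unpack into the data directory, start it again
sudo systemctl stop algorand
pixz -d < mainnet-snapshot.tar.pxz | sudo tar -xf - -C /var/lib/algorand
sudo systemctl start algorand
```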

But I only have a VM or a slower SSD!

Same deal as above. This might work for you, as the node will only need to sync the last day’s worth of blocks, which should (hopefully) take less than a day.
Just read the readme file and install the PIXZ utility first.

What is a Relay node?

Relay nodes are just relays - they pass blocks to and from other nodes. They do not participate in the consensus but are vital to the exchange of blocks. Catchup and archival nodes get new blocks from relay nodes. There is a default, permissioned list of relay nodes that is known to every other node. Running a relay node does not put you on that public list - one needs to apply for a spot by contacting the Algorand Foundation.

Node syncing issues

Read the section on archival nodes above, or ask on our Discord channel.

Indexer

Do I need the Indexer?

The Node API is best for accessing the state of a single object - an account, an app, a block, or a recently posted transaction.
So most dApps do need the Indexer API, as it provides endpoints with search and filtering capabilities that return multiple matching accounts, transactions, etc.

The indexer requires a full archival node close by, and an even faster disk to fully sync.
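To illustrate the difference, here are two curl calls against AlgoNode’s free mainnet endpoints (the hostnames were correct at the time of writing; ADDRESS is a placeholder for a real account address):

```shell
# Node API: current state of one account
curl -s https://mainnet-api.algonode.cloud/v2/accounts/ADDRESS

# Indexer API: search across history - e.g. the last 10 transactions for that account
curl -s "https://mainnet-idx.algonode.cloud/v2/transactions?address=ADDRESS&limit=10"
```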

Here are some indexer related links:

Hmm, the Internet seems to be missing an indexer setup how-to.
I think we need to create one, plus develop a virtual node that would allow a fast sync from the algonode.io free endpoints without the need for a local archival node.


Do I get paid for running ….. node?

Nope, you just get that warm fuzzy feeling of participating in decentralization.