
[GnoVM] Request: add simple way to get VM cycles used on specific call #1973

Open
leohhhn opened this issue Apr 23, 2024 · 7 comments

leohhhn commented Apr 23, 2024

Description

It would be great if we could have a simple way to see how many cycles a specific call has used. For example, adding a `--measure-cycles` flag to `gnokey` would be a starting point.

@leohhhn leohhhn changed the title [GnoVM] Request: add simple way to get VM cycles on specific call [GnoVM] Request: add simple way to get VM cycles used on specific call Apr 23, 2024

deelawn commented Apr 24, 2024

The gas used should be a function of the cycles spent. Would an end user care how many cycles are used?


leohhhn commented Apr 24, 2024

Are we sure that gas will be a function of cycles used? 😄

In any case, might be cool for debugging/metrics/stress testing.


deelawn commented Apr 24, 2024

Yes, it will be. Is your proposal to return VM cycles used like we return gas used when sending transactions via gnokey?


leohhhn commented Apr 25, 2024

Correct!


leohhhn commented Apr 25, 2024

For reference: it seems the gno test binary already has an option for displaying cycles along with some other metrics: `gno test . -v -print-runtime-metrics`. This is an acceptable workaround, i.e. you can write a test that simply calls the function you want to measure cycles for:

func TestC(t *testing.T) {
	Render("")
}

Still, I would like to see a `-debug` or `-metrics` option when using `gnokey`. WDYT?


deelawn commented Apr 25, 2024

I'm not sure I'm convinced. Maybe someone else can chime in on this.


This issue is stale because it has been open 6 months with no activity. Remove stale label or comment or this will be closed in 3 months.

@github-actions github-actions bot added the Stale label Nov 17, 2024
thehowl added a commit that referenced this issue Dec 18, 2024

<details><summary>Contributors' checklist...</summary>

- [x] Added new tests, or not needed, or not feasible
- [x] Provided an example (e.g. screenshot) to aid review or the PR is self-explanatory
- [x] Updated the official documentation or not needed
- [x] No breaking changes were made, or a `BREAKING CHANGE: xxx` message was included in the description
- [x] Added references to related issues and PRs
- [x] Provided any useful hints for running manual tests
- [ ] Added new benchmarks to [generated
graphs](https://gnoland.github.io/benchmarks), if any. More info
[here](https://github.com/gnolang/gno/blob/master/.benchmarks/README.md).
</details>

We built this tool mainly to address the following issues:

#1826
#1828
#1281
#1973

We could also use it in the following cases:

#1973
#2222




`gnobench` benchmarks the time consumed by each VM CPU OpCode and by persistent access to the store, including the marshalling and unmarshalling of realm objects.

## Design consideration

### Minimum Overhead and Footprint

- Benchmarking is enabled through constant build flags.
- Operations and measurements are encoded in binary.
- Results are dumped to a local file in binary form.
- No logging, printing, or network access is involved.

### Accuracy

- Pause the timer for storage access while performing VM opcode benchmarking.
- Measure each OpCode execution in nanoseconds.
- Store access measurements include the duration of Amino marshalling and unmarshalling.


It is built on top of @deelawn's design and framework with @jaekwon's
input.
#2073



## Usage

### Simple mode

The benchmark involves only the GnoVM and the persistent store. It benchmarks the bare minimum components, so the results are isolated from other components. We use a standardized gno contract to perform the benchmarking.

This mode is best suited for benchmarking each major release and/or changes in the GnoVM.

    make opcode
    make storage

### Production mode

It benchmarks the node in a production environment with minimal overhead.
We can not only benchmark with the standardized contracts but also capture live usage in the production environment.
This gives us a complete picture of the node's performance.


1. Build the production node with benchmarking flags:

   `go build -tags "benchmarkingstorage benchmarkingops" gno.land/cmd/gnoland`

2. Run the node in the production environment. It will dump benchmark data to a `benchmark.bin` file.

3. Call the realm contracts at `gno.land/r/x/benchmark/opcodes` and `gno.land/r/x/benchmark/storage`.

4. Stop the server after the benchmarking session is complete.

5. Run the following command to convert the binary dump:

   `gnobench -bin path_to_benchmark_bin`

   This converts the binary dump to `results.csv` and `results_stats.csv`.

## Results (Examples)

The benchmarking results are stored in two files:

1. The raw results are saved in `results.csv`.

   | Operation       | Elapsed Time (ns) | Disk IO Bytes |
   |-----------------|-------------------|---------------|
   | OpEval          | 40333             | 0             |
   | OpPopBlock      | 208               | 0             |
   | OpHalt          | 167               | 0             |
   | OpEval          | 500               | 0             |
   | OpInterfaceType | 458               | 0             |
   | OpPopBlock      | 166               | 0             |
   | OpHalt          | 125               | 0             |
   | OpInterfaceType | 21125             | 0             |
   | OpEval          | 541               | 0             |
   | OpEval          | 209               | 0             |
   | OpInterfaceType | 334               | 0             |


2. The averages and standard deviations are summarized in `results_stats.csv`.

   | Operation      | Avg Time (ns) | Avg Size | Time Std Dev | Count |
   |----------------|---------------|----------|--------------|-------|
   | OpAdd          | 101           | 0        | 45           | 300   |
   | OpAddAssign    | 309           | 0        | 1620         | 100   |
   | OpArrayLit     | 242           | 0        | 170          | 700   |
   | OpArrayType    | 144           | 0        | 100          | 714   |
   | OpAssign       | 136           | 0        | 95           | 2900  |
   | OpBand         | 92            | 0        | 30           | 100   |
   | OpBandAssign   | 127           | 0        | 62           | 100   |
   | OpBandn        | 97            | 0        | 54           | 100   |
   | OpBandnAssign  | 125           | 0        | 113          | 100   |
   | OpBinary1      | 128           | 0        | 767          | 502   |
   | OpBody         | 127           | 0        | 145          | 13700 |

---------

Co-authored-by: Morgan Bazalgette <[email protected]>
omarsy pushed a commit to omarsy/gno that referenced this issue Dec 18, 2024
albttx pushed a commit that referenced this issue Jan 10, 2025