Adding max_seq_length to vision eval config (#1802)
SalmanMohammadi authored Oct 11, 2024
1 parent c5b7386 commit c4044bc
Showing 1 changed file with 1 addition and 0 deletions: recipes/configs/llama3_2_vision/evaluation.yaml
@@ -41,6 +41,7 @@ tasks: ["mmmu_val_science"] # Defaulting to science as a good subset
 limit: null
 batch_size: 1
 enable_kv_cache: True
+max_seq_length: 8192
 
 # Quantization specific args
 # Quantization is not supported in this specific config
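
As a usage sketch (assuming the standard torchtune CLI, that this config is registered under the name llama3_2_vision/evaluation, and that the usual key=value override syntax applies; the exact invocation may differ, so check the repository docs), the new field can also be overridden at launch time rather than edited in the file:

tune run eleuther_eval --config llama3_2_vision/evaluation max_seq_length=4096

Here max_seq_length presumably caps the token sequence length used during evaluation; this commit gives it a default of 8192 in the vision eval config.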
