[BUG] AttributeError: 'Accelerator' object has no attribute 'deepspeed_config' #4143
Comments
@thyywr759, please share steps for repro? Thanks!
When I remove the environment variable (commenting out `export WANDB_LOG_MODEL=true`), the problem is solved. This suggests that it has nothing to do with a specific project, but rather with wandb.
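For anyone hitting the same thing, here is a minimal sketch of that workaround in Python (assuming `WANDB_LOG_MODEL` is otherwise being set in your shell or launch script); it simply makes sure the variable is unset before training starts:

```python
import os

# Workaround sketch, following the comment above: WANDB_LOG_MODEL is what
# makes the Hugging Face WandbCallback upload the model in on_train_end
# (the code path that hits the missing `deepspeed_config` attribute), so
# dropping it sidesteps that path. Equivalent to commenting out the
# `export WANDB_LOG_MODEL=true` line in the shell.
os.environ.pop("WANDB_LOG_MODEL", None)
```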
Closing as issue appears to be resolved.
I have the same issue:

```bash
(ft-llm) ubuntu@ip-172-31-89-151:~/llm-ft/falcon$ accelerate launch main2.py
/home/ubuntu/anaconda3/envs/ft-llm/lib/python3.9/site-packages/accelerate/utils/dataclasses.py:641: UserWarning: DeepSpeed Zero3 Init flag is only applicable for ZeRO Stage 3. Setting it to False.
warnings.warn("DeepSpeed Zero3 Init flag is only applicable for ZeRO Stage 3. Setting it to False.")
[2023-10-18 12:59:10,836] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-10-18 12:59:12,294] [INFO] [comm.py:637:init_distributed] cdb=None
[2023-10-18 12:59:12,294] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
fsdp_plugin
FullyShardedDataParallelPlugin(sharding_strategy=<ShardingStrategy.FULL_SHARD: 1>, backward_prefetch=None, mixed_precision_policy=None, auto_wrap_policy=None, cpu_offload=CPUOffload(offload_params=False), ignored_modules=None, state_dict_type=<StateDictType.FULL_STATE_DICT: 1>, state_dict_config=FullStateDictConfig(offload_to_cpu=False, rank0_only=False), optim_state_dict_config=FullOptimStateDictConfig(offload_to_cpu=False, rank0_only=False), limit_all_gathers=False, use_orig_params=False, param_init_fn=<function FullyShardedDataParallelPlugin.__post_init__.<locals>.<lambda> at 0x7f184df4a0d0>, sync_module_states=True, forward_prefetch=False, activation_checkpointing=False)
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:05<00:00, 2.94s/it]
Traceback (most recent call last):
File "/home/ubuntu/llm-ft/falcon/main2.py", line 41, in <module>
state_dict=accelerator.get_state_dict(model)
File "/home/ubuntu/anaconda3/envs/ft-llm/lib/python3.9/site-packages/accelerate/accelerator.py", line 3060, in get_state_dict
if self.deepspeed_config["zero_optimization"]["stage"] == 3:
AttributeError: 'Accelerator' object has no attribute 'deepspeed_config'
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 36540) of binary: /home/ubuntu/anaconda3/envs/ft-llm/bin/python
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/ft-llm/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/home/ubuntu/anaconda3/envs/ft-llm/lib/python3.9/site-packages/accelerate/commands/accelerate_cli.py", line 47, in main
args.func(args)
File "/home/ubuntu/anaconda3/envs/ft-llm/lib/python3.9/site-packages/accelerate/commands/launch.py", line 971, in launch_command
deepspeed_launcher(args)
File "/home/ubuntu/anaconda3/envs/ft-llm/lib/python3.9/site-packages/accelerate/commands/launch.py", line 687, in deepspeed_launcher
distrib_run.run(args)
File "/home/ubuntu/anaconda3/envs/ft-llm/lib/python3.9/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/home/ubuntu/anaconda3/envs/ft-llm/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/ubuntu/anaconda3/envs/ft-llm/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
main2.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-10-18_13:00:03
host : ip-172-31-89-151.ec2.internal
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 36540)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
```
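For reference, the failing call in that log is the script's own `accelerator.get_state_dict(model)` (main2.py, line 41). Below is a hedged sketch of a user-side guard, not the upstream fix, that only takes the DeepSpeed path when a DeepSpeed plugin is actually configured (`torch.nn.Linear` stands in for the real model):

```python
import torch
from accelerate import Accelerator
from accelerate.utils import DistributedType

accelerator = Accelerator()
model = accelerator.prepare(torch.nn.Linear(8, 8))  # placeholder model

if accelerator.distributed_type == DistributedType.DEEPSPEED:
    # DeepSpeed path: get_state_dict() consolidates ZeRO-partitioned weights.
    state_dict = accelerator.get_state_dict(model)
else:
    # Non-DeepSpeed path (e.g. the FSDP plugin printed above): read the
    # unwrapped model's state_dict directly, avoiding the attribute access
    # that raises in the traceback. For FSDP, extra state_dict_type handling
    # may still be needed; this is only a sketch.
    state_dict = accelerator.unwrap_model(model).state_dict()
```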
This just happened after a 19-hour training session:

```bash
{'loss': 0.7306, 'learning_rate': 6.6391128853307e-07, 'epoch': 3.95}
{'loss': 0.6796, 'learning_rate': 6.608934129171407e-07, 'epoch': 3.96}
{'loss': 0.6441, 'learning_rate': 6.578823892230079e-07, 'epoch': 3.96}
{'loss': 0.7221, 'learning_rate': 6.54878217658339e-07, 'epoch': 3.96}
{'loss': 0.6586, 'learning_rate': 6.518808984303459e-07, 'epoch': 3.96}
{'loss': 0.6562, 'learning_rate': 6.488904317457633e-07, 'epoch': 3.96}
{'loss': 0.7101, 'learning_rate': 6.459068178108818e-07, 'epoch': 3.96}
{'loss': 0.6937, 'learning_rate': 6.429300568314811e-07, 'epoch': 3.96}
{'loss': 0.7088, 'learning_rate': 6.399601490128748e-07, 'epoch': 3.96}
{'loss': 0.6441, 'learning_rate': 6.369970945599324e-07, 'epoch': 3.96}
{'loss': 0.7192, 'learning_rate': 6.340408936770126e-07, 'epoch': 3.96}
{'loss': 0.699, 'learning_rate': 6.310915465680412e-07, 'epoch': 3.96}
{'loss': 0.7153, 'learning_rate': 6.28149053436422e-07, 'epoch': 3.96}
{'loss': 0.7148, 'learning_rate': 6.252134144851374e-07, 'epoch': 3.96}
{'loss': 0.6594, 'learning_rate': 6.222846299166585e-07, 'epoch': 3.96}
{'loss': 0.7242, 'learning_rate': 6.193626999330127e-07, 'epoch': 3.96}
{'loss': 0.6717, 'learning_rate': 6.164476247357165e-07, 'epoch': 3.96}
{'loss': 0.6647, 'learning_rate': 6.135394045258647e-07, 'epoch': 3.96}
{'loss': 0.6715, 'learning_rate': 6.106380395040301e-07, 'epoch': 3.96}
{'loss': 0.6911, 'learning_rate': 6.077435298703527e-07, 'epoch': 3.96}
{'loss': 0.7189, 'learning_rate': 6.048558758244727e-07, 'epoch': 3.96}
{'loss': 0.7049, 'learning_rate': 6.019750775655641e-07, 'epoch': 3.96}
{'loss': 0.6655, 'learning_rate': 5.991011352923237e-07, 'epoch': 3.96}
{'loss': 0.6744, 'learning_rate': 5.962340492030039e-07, 'epoch': 3.96}
{'loss': 0.7434, 'learning_rate': 5.933738194953465e-07, 'epoch': 3.96}
{'loss': 0.6863, 'learning_rate': 5.905204463666269e-07, 'epoch': 3.96}
{'loss': 0.6882, 'learning_rate': 5.876739300136879e-07, 'epoch': 3.96}
{'loss': 0.709, 'learning_rate': 5.848342706328392e-07, 'epoch': 3.96}
{'loss': 0.6821, 'learning_rate': 5.820014684199459e-07, 'epoch': 3.96}
{'loss': 0.6886, 'learning_rate': 5.791755235704188e-07, 'epoch': 3.96}
{'loss': 0.716, 'learning_rate': 5.763564362791796e-07, 'epoch': 3.97}
{'loss': 0.6128, 'learning_rate': 5.735442067406504e-07, 'epoch': 3.97}
{'train_runtime': 69622.7959, 'train_samples_per_second': 43.195, 'train_steps_per_second': 0.172, 'train_loss': 0.7354941845767455, 'epoch': 3.97}
97%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████ | 11564/11972 [19:20:20<36:23, 5.35s/it]
Traceback (most recent call last):
File "/root/miniconda3/envs/py3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/root/miniconda3/envs/py3.10/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/workspace/axolotl/src/axolotl/cli/train.py", line 43, in <module>
fire.Fire(do_cli)
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/workspace/axolotl/src/axolotl/cli/train.py", line 39, in do_cli
train(cfg=parsed_cfg, cli_args=parsed_cli_args, dataset_meta=dataset_meta)
File "/workspace/axolotl/src/axolotl/train.py", line 149, in train
trainer.train(resume_from_checkpoint=resume_from_checkpoint)
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 1543, in train
return inner_training_loop(
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 1996, in _inner_training_loop
self.control = self.callback_handler.on_train_end(args, self.state, self.control)
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer_callback.py", line 373, in on_train_end
return self.call_event("on_train_end", args, state, control)
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer_callback.py", line 414, in call_event
result = getattr(callback, event)(
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/integrations/integration_utils.py", line 777, in on_train_end
fake_trainer.save_model(temp_dir)
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 2836, in save_model
state_dict = self.accelerator.get_state_dict(self.deepspeed)
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/accelerate/accelerator.py", line 3043, in get_state_dict
if self.deepspeed_config["zero_optimization"]["stage"] == 3:
AttributeError: 'Accelerator' object has no attribute 'deepspeed_config'
Traceback (most recent call last):
File "/root/miniconda3/envs/py3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/root/miniconda3/envs/py3.10/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/workspace/axolotl/src/axolotl/cli/train.py", line 43, in <module>
fire.Fire(do_cli)
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/workspace/axolotl/src/axolotl/cli/train.py", line 39, in do_cli
train(cfg=parsed_cfg, cli_args=parsed_cli_args, dataset_meta=dataset_meta)
File "/workspace/axolotl/src/axolotl/train.py", line 149, in train
trainer.train(resume_from_checkpoint=resume_from_checkpoint)
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 1543, in train
return inner_training_loop(
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 1996, in _inner_training_loop
self.control = self.callback_handler.on_train_end(args, self.state, self.control)
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer_callback.py", line 373, in on_train_end
return self.call_event("on_train_end", args, state, control)
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer_callback.py", line 414, in call_event
result = getattr(callback, event)(
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/integrations/integration_utils.py", line 777, in on_train_end
fake_trainer.save_model(temp_dir)
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 2836, in save_model
state_dict = self.accelerator.get_state_dict(self.deepspeed)
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/accelerate/accelerator.py", line 3043, in get_state_dict
if self.deepspeed_config["zero_optimization"]["stage"] == 3:
AttributeError: 'Accelerator' object has no attribute 'deepspeed_config'
wandb: / 1168.867 MB of 1168.867 MB uploaded (0.071 MB deduped)
wandb: Run history:
wandb: eval/loss █▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb: eval/runtime ▆▅█▆▃▆▁▄▄▄▃▅▆▃▆▆
wandb: eval/samples_per_second ▃▄▁▃▆▃█▅▅▅▆▄▃▆▃▃
wandb: eval/steps_per_second ▃▄▁▃▆▃█▅▅▅▆▄▃▆▃▃
wandb: train/epoch ▁▁▁▁▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███
wandb: train/global_step ▁▁▁▁▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███
wandb: train/learning_rate ████████▇▇▇▇▇▆▆▆▆▅▅▅▅▄▄▄▄▃▃▃▃▂▂▂▂▂▁▁▁▁▁▁
wandb: train/loss ██▅▅▅▆▄▄▄▅▄▃▃▄▄▃▄▃▄▄▁▂▃▃▃▃▄▂▃▃▁▃▂▁▄▂▂▃▂▂
wandb: train/total_flos ▁
wandb: train/train_loss ▁
wandb: train/train_runtime ▁
wandb: train/train_samples_per_second ▁
wandb: train/train_steps_per_second ▁
wandb:
wandb: Run summary:
wandb: eval/loss 0.71755
wandb: eval/runtime 293.5415
wandb: eval/samples_per_second 134.805
wandb: eval/steps_per_second 16.853
wandb: train/epoch 3.97
wandb: train/global_step 11564
wandb: train/learning_rate 0.0
wandb: train/loss 0.6128
wandb: train/total_flos 1.149283794726735e+19
wandb: train/train_loss 0.73549
wandb: train/train_runtime 69622.7959
wandb: train/train_samples_per_second 43.195
wandb: train/train_steps_per_second 0.172
wandb:
wandb: 🚀 View run tinyllama-instruct at: https://wandb.ai/gardner/tinyllama-instruct/runs/efbcj8ze
wandb: Synced 6 W&B file(s), 0 media file(s), 48 artifact file(s) and 1 other file(s)
wandb: Find logs at: ./wandb/run-20240120_143028-efbcj8ze/logs
[2024-01-21 09:51:04,247] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 422) of binary: /root/miniconda3/envs/py3.10/bin/python3
Traceback (most recent call last):
File "/root/miniconda3/envs/py3.10/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 47, in main
args.func(args)
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1014, in launch_command
multi_gpu_launcher(args)
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/accelerate/commands/launch.py", line 672, in multi_gpu_launcher
distrib_run.run(args)
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/torch/distributed/run.py", line 797, in run
elastic_launch(
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
axolotl.cli.train FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-01-21_09:51:04
host : 681786dbbafd
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 422)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
root@ubuntu:~$
```
This worked for me: removing the wandb logging integration (I removed the line). I think #1092 addresses the same problem.
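A hedged sketch of that workaround with a stock Hugging Face `Trainer` setup (the output path here is a placeholder, not the poster's actual config): disabling W&B reporting keeps the `WandbCallback` from being attached, so its `on_train_end` model upload, the failing path in the tracebacks above, never runs.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",    # placeholder output path
    report_to="none",    # don't attach the WandbCallback at all
)
```

Alternatively, if you want the W&B metric logging but not the model upload, leaving `WANDB_LOG_MODEL` unset (as noted earlier in the thread) should be enough.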
Describe the bug
In on_train_end, an AttributeError is raised: 'Accelerator' object has no attribute 'deepspeed_config'.
To Reproduce
None
Expected behavior
A clear and concise description of what you expected to happen.
ds_report output
```bash
[2023-08-14 18:02:42,266] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
DeepSpeed C++/CUDA extension op report
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
JIT compiled ops requires ninja
ninja .................. [OKAY]
op name ................ installed .. compatible
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.0
[WARNING] using untested triton version (2.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
DeepSpeed general environment info:
torch install path ............... ['/home/maojianguo/anaconda3/envs/mjg_torch2.0.1/lib/python3.8/site-packages/torch']
torch version .................... 2.0.1+cu117
deepspeed install path ........... ['/home/maojianguo/anaconda3/envs/mjg_torch2.0.1/lib/python3.8/site-packages/deepspeed']
deepspeed info ................... 0.10.0, unknown, unknown
torch cuda version ............... 11.7
torch hip version ................ None
nvcc version ..................... 11.3
deepspeed wheel compiled w. ...... torch 2.0, cuda 11.7
```
Screenshots
```bash
Traceback (most recent call last):
File "main.py", line 430, in <module>
main()
File "/home/maojianguo/anaconda3/envs/mjg_torch2.0.1/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "main.py", line 374, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/maojianguo/anaconda3/envs/mjg_torch2.0.1/lib/python3.8/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/home/maojianguo/anaconda3/envs/mjg_torch2.0.1/lib/python3.8/site-packages/transformers/trainer.py", line 1971, in _inner_training_loop
self.control = self.callback_handler.on_train_end(args, self.state, self.control)
File "/home/maojianguo/anaconda3/envs/mjg_torch2.0.1/lib/python3.8/site-packages/transformers/trainer_callback.py", line 356, in on_train_end
return self.call_event("on_train_end", args, state, control)
File "/home/maojianguo/anaconda3/envs/mjg_torch2.0.1/lib/python3.8/site-packages/transformers/trainer_callback.py", line 397, in call_event
result = getattr(callback, event)(
File "/home/maojianguo/anaconda3/envs/mjg_torch2.0.1/lib/python3.8/site-packages/transformers/integrations.py", line 770, in on_train_end
fake_trainer.save_model(temp_dir)
File "/home/maojianguo/anaconda3/envs/mjg_torch2.0.1/lib/python3.8/site-packages/transformers/trainer.py", line 2758, in save_model
state_dict = self.accelerator.get_state_dict(self.deepspeed)
File "/home/maojianguo/anaconda3/envs/mjg_torch2.0.1/lib/python3.8/site-packages/accelerate/accelerator.py", line 2829, in get_state_dict
if self.deepspeed_config["zero_optimization"]["stage"] == 3:
AttributeError: 'Accelerator' object has no attribute 'deepspeed_config'
```
System info (please complete the following information):
Launcher context
```json
{
"train_micro_batch_size_per_gpu": "auto",
"zero_allow_untested_optimizer": true,
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"initial_scale_power": 16,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"zero_optimization": {
"stage": 1,
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"overlap_comm": false,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"contiguous_gradients" : true,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true,
"buffer_count": 4,
"fast_init": false
}
},
"gradient_accumulation_steps": "auto",
"steps_per_print": "auto",
"bf16": {
"enabled": "auto"
}
}
```
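For context, a config like the one above is normally consumed by the Hugging Face `Trainer` via `TrainingArguments.deepspeed` (or passed along through `accelerate launch`). A minimal sketch, assuming the JSON is saved as `ds_config.json` (a hypothetical filename) and that `deepspeed` is installed:

```python
from transformers import TrainingArguments

# Sketch only, not the reporter's actual script: the "auto" entries in
# ds_config.json are resolved from these training arguments at runtime.
args = TrainingArguments(
    output_dir="out",              # placeholder
    per_device_train_batch_size=1,
    deepspeed="ds_config.json",    # path to the JSON config shown above
)
```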