Commit

CI: add multi-GPU (#585)
* ci: azure

* cv

* ci

* testing

* v2

* docs

* size

* skip
Borda authored Mar 10, 2021
1 parent 41ebb3e commit 435d8d4
Showing 25 changed files with 329 additions and 377 deletions.
8 changes: 8 additions & 0 deletions CHANGELOG.md
@@ -4,6 +4,14 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).


## [0.3.2] - 2021-03-DD

### Changed

- Renamed SSL modules: `CPCV2` >> `CPC_v2` and `MocoV2` >> `Moco_v2` ([#585](https://github.com/PyTorchLightning/lightning-bolts/pull/585))
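
  A minimal migration sketch for downstream code (assuming only the class names changed):

  ```python
  # old names (<= 0.3.1), kept as a comment for reference:
  # from pl_bolts.models.self_supervised import CPCV2, MocoV2

  # new names (>= 0.3.2)
  from pl_bolts.models.self_supervised import CPC_v2, Moco_v2
  ```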


## [0.3.1] - 2021-03-09

### Added
37 changes: 20 additions & 17 deletions README.md
@@ -19,6 +19,7 @@

[![PyPI Status](https://badge.fury.io/py/lightning-bolts.svg)](https://badge.fury.io/py/lightning-bolts)
[![PyPI Status](https://pepy.tech/badge/lightning-bolts)](https://pepy.tech/project/lightning-bolts)
[![Build Status](https://dev.azure.com/PytorchLightning/lightning%20Bolts/_apis/build/status/PyTorchLightning.lightning-bolts?branchName=master)](https://dev.azure.com/PytorchLightning/lightning%20Bolts/_build/latest?definitionId=5&branchName=master)
[![codecov](https://codecov.io/gh/PyTorchLightning/lightning-bolts/branch/master/graph/badge.svg)](https://codecov.io/gh/PyTorchLightning/lightning-bolts)
[![CodeFactor](https://www.codefactor.io/repository/github/pytorchlightning/lightning-bolts/badge)](https://www.codefactor.io/repository/github/pytorchlightning/lightning-bolts)

@@ -37,37 +38,39 @@

## Continuous Integration

<details>
<summary>CI testing</summary>

| System / PyTorch ver. | 1.6 (min. req.) | 1.8 (latest) |
| :---: | :---: | :---: |
| Linux py3.{6,8} | ![CI full testing](https://github.com/PyTorchLightning/lightning-bolts/workflows/CI%20full%20testing/badge.svg?branch=master&event=push) | ![CI full testing](https://github.com/PyTorchLightning/lightning-bolts/workflows/CI%20full%20testing/badge.svg?branch=master&event=push) |
| OSX py3.{6,8} | ![CI full testing](https://github.com/PyTorchLightning/lightning-bolts/workflows/CI%20full%20testing/badge.svg?branch=master&event=push) | ![CI full testing](https://github.com/PyTorchLightning/lightning-bolts/workflows/CI%20full%20testing/badge.svg?branch=master&event=push) |
| Windows py3.7* | ![CI base testing](https://github.com/PyTorchLightning/lightning-bolts/workflows/CI%20base%20testing/badge.svg?branch=master&event=push) | ![CI base testing](https://github.com/PyTorchLightning/lightning-bolts/workflows/CI%20base%20testing/badge.svg?branch=master&event=push) |


- _\* tests only the package itself; the full test suite (the `tests` folder) is skipped_

</details>

## Install

<details>
<summary>View install</summary>

Simple installation from PyPI
```bash
pip install lightning-bolts
```

Install bleeding-edge (no guarantees)
```bash
pip install git+https://github.com/PytorchLightning/lightning-bolts.git@master --upgrade
```

To get the full experience, install all optional packages at once
```bash
pip install lightning-bolts["extra"]
```
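
A quick way to confirm the install (assumes only that the package exposes its version string):
```python
# post-install sanity check
import pl_bolts

print(pl_bolts.__version__)
```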

</details>

## What is Bolts
67 changes: 67 additions & 0 deletions azure-pipelines.yml
@@ -0,0 +1,67 @@
# Python package
# Create and test a Python package on multiple Python versions.
# Add steps that analyze code, save the dist with the build record, publish to a PyPI-compatible index, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/python

trigger:
  tags:
    include:
      - '*'
  branches:
    include:
      - master
      - release/*
      - refs/tags/*
pr:
  - master
  - release/*

jobs:
  - job: pytest
    # how long to run the job before automatically cancelling
    timeoutInMinutes: 45
    # how much time to give 'run always even if cancelled tasks' before stopping them
    cancelTimeoutInMinutes: 2

    pool: gridai-spot-pool

    # ToDo: this needs docker installed in the base image...
    #container: "pytorchlightning/pytorch_lightning:base-cuda-py$[ variables['python.version'] ]-torch1.6"
    container:
      image: "pytorchlightning/pytorch_lightning:base-cuda-py3.7-torch1.6"
      #endpoint: azureContainerRegistryConnection
      options: "--runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all"

    workspace:
      clean: all

    steps:

    - bash: |
        lspci | egrep 'VGA|3D'
        whereis nvidia
        nvidia-smi
        python --version
        pip --version
        pip list
      displayName: 'Image info & NVIDIA'
    - bash: |
        python -c "import torch ; mgpu = torch.cuda.device_count() ; assert mgpu >= 2, f'GPU: {mgpu}'"
      displayName: 'Sanity check'
    - bash: |
        # python -m pip install "pip==20.1"
        pip install --requirement ./requirements/devel.txt --upgrade-strategy only-if-needed
        pip list
      displayName: 'Install dependencies'
    - bash: |
        python -m coverage run --source pl_bolts -m pytest pl_bolts tests -v --durations=30
      displayName: 'Testing'
    - bash: |
        python -m coverage report
        python -m coverage xml
        codecov --token=$(CODECOV_TOKEN) --flags=gpu,pytest --name="GPU-coverage" --env=linux,azure
      displayName: 'Statistics'
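
The 'Sanity check' step above is just an inline assertion that at least two GPUs are visible; a standalone sketch of the same check (the 2-GPU threshold mirrors the pipeline, everything else is illustrative):

```python
# sketch of the multi-GPU sanity check, runnable on any CUDA machine
import torch

num_gpus = torch.cuda.device_count()
assert num_gpus >= 2, f"GPU: {num_gpus}"  # the CI job requires a multi-GPU agent
print(f"found {num_gpus} CUDA devices")
```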
12 changes: 6 additions & 6 deletions docs/source/introduction_guide.rst
@@ -20,7 +20,7 @@ All models are tested (daily), benchmarked, documented and work on CPUs, TPUs, GPUs
from pl_bolts.models import VAE
from pl_bolts.models.vision import GPT2, ImageGPT, PixelCNN
from pl_bolts.models.self_supervised import AMDIM, CPCV2, SimCLR, MocoV2
from pl_bolts.models.self_supervised import AMDIM, CPC_v2, SimCLR, Moco_v2
from pl_bolts.models import LinearRegression, LogisticRegression
from pl_bolts.models.gans import GAN
from pl_bolts.callbacks import PrintTableMetricsCallback
@@ -149,15 +149,15 @@ For example, you could use a pretrained VAE to generate features for an image dataset
.. testcode::

from pl_bolts.models.autoencoders import VAE
from pl_bolts.models.self_supervised import CPCV2
from pl_bolts.models.self_supervised import CPC_v2

model1 = VAE(input_height=32, pretrained='imagenet2012')
encoder = model1.encoder
encoder.eval()

# bolts are pretrained on different datasets
model2 = CPCV2(encoder='resnet18', pretrained='imagenet128').freeze()
model3 = CPCV2(encoder='resnet18', pretrained='stl10').freeze()
model2 = CPC_v2(encoder='resnet18', pretrained='imagenet128').freeze()
model3 = CPC_v2(encoder='resnet18', pretrained='stl10').freeze()
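
A minimal sketch of the feature-extraction step itself (the random batch and its 32x32 shape are only illustrative):

.. code-block:: python

    import torch

    # fake batch of 32x32 RGB images
    images = torch.rand(4, 3, 32, 32)

    with torch.no_grad():
        features = encoder(images)  # representations from the pretrained VAE encoder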

.. code-block:: python
@@ -178,7 +178,7 @@ you can use any finetuning protocol you prefer.
.. code-block:: python
# unfrozen finetune
model = CPCV2(encoder='resnet18', pretrained='imagenet128')
model = CPC_v2(encoder='resnet18', pretrained='imagenet128')
resnet18 = model.encoder
# don't call .freeze()
@@ -193,7 +193,7 @@ you can use any finetuning protocol you prefer.
.. code-block:: python
# FREEZE!
model = CPCV2(encoder='resnet18', pretrained='imagenet128')
model = CPC_v2(encoder='resnet18', pretrained='imagenet128')
resnet18 = model.encoder
resnet18.eval()
6 changes: 3 additions & 3 deletions docs/source/models_howto.rst
@@ -358,10 +358,10 @@ approaches.

.. testcode::

from pl_bolts.models.self_supervised import AMDIM, CPCV2
from pl_bolts.models.self_supervised import AMDIM, CPC_v2

default_amdim_task = AMDIM().contrastive_task
model = CPCV2(contrastive_task=default_amdim_task, encoder='cpc_default')
model = CPC_v2(contrastive_task=default_amdim_task, encoder='cpc_default')
# you might need to modify the cpc encoder depending on what you use

.. testoutput::
@@ -389,7 +389,7 @@ pieces together
self.gan = GAN()
self.vae = VAE()
self.amdim = AMDIM()
self.cpc = CPCV2
self.cpc = CPC_v2

def training_step(self, batch, batch_idx):
(x, y) = batch
28 changes: 14 additions & 14 deletions docs/source/self_supervised_models.rst
@@ -69,11 +69,11 @@ Mix and match any part, or subclass to create your own new method

.. code-block:: python
from pl_bolts.models.self_supervised import CPCV2
from pl_bolts.models.self_supervised import CPC_v2
from pl_bolts.losses.self_supervised_learning import FeatureMapContrastiveTask
amdim_task = FeatureMapContrastiveTask(comparisons='01, 11, 02', bidirectional=True)
model = CPCV2(contrastive_task=amdim_task)
model = CPC_v2(contrastive_task=amdim_task)
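
A sketch of the subclassing route, assuming only that ``contrastive_task`` is accepted by the constructor (as shown above):

.. code-block:: python

    # bake a custom contrastive task into your own method
    class AmdimStyleCPC(CPC_v2):
        def __init__(self, **kwargs):
            task = FeatureMapContrastiveTask(comparisons='01, 11, 02', bidirectional=True)
            super().__init__(contrastive_task=task, **kwargs)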
-----------------

@@ -114,7 +114,7 @@ Model implemented by:
To Train::

import pytorch_lightning as pl
from pl_bolts.models.self_supervised import CPCV2
from pl_bolts.models.self_supervised import CPC_v2
from pl_bolts.datamodules import CIFAR10DataModule
from pl_bolts.models.self_supervised.cpc import (
CPCTrainTransformsCIFAR10, CPCEvalTransformsCIFAR10)
@@ -125,7 +125,7 @@ To Train::
dm.val_transforms = CPCEvalTransformsCIFAR10()

# model
model = CPCV2()
model = CPC_v2()

# fit
trainer = pl.Trainer()
@@ -186,10 +186,10 @@ Results in table are reported from the
CIFAR-10 pretrained model::

from pl_bolts.models.self_supervised import CPCV2
from pl_bolts.models.self_supervised import CPC_v2

weight_path = 'https://pl-bolts-weights.s3.us-east-2.amazonaws.com/cpc/cpc-cifar10-v4-exp3/epoch%3D474.ckpt'
cpc_v2 = CPCV2.load_from_checkpoint(weight_path, strict=False)
cpc_v2 = CPC_v2.load_from_checkpoint(weight_path, strict=False)

cpc_v2.freeze()

@@ -215,10 +215,10 @@ Fine-tuning:
STL-10 pretrained model::

from pl_bolts.models.self_supervised import CPCV2
from pl_bolts.models.self_supervised import CPC_v2

weight_path = 'https://pl-bolts-weights.s3.us-east-2.amazonaws.com/cpc/cpc-stl10-v0-exp3/epoch%3D624.ckpt'
cpc_v2 = CPCV2.load_from_checkpoint(weight_path, strict=False)
cpc_v2 = CPC_v2.load_from_checkpoint(weight_path, strict=False)

cpc_v2.freeze()

@@ -242,16 +242,16 @@ Fine-tuning:

|
CPCV2 API
*********
CPC (v2) API
^^^^^^^^^^^^

.. autoclass:: pl_bolts.models.self_supervised.CPCV2
.. autoclass:: pl_bolts.models.self_supervised.CPC_v2
:noindex:

Moco (V2)
^^^^^^^^^
Moco (v2) API
^^^^^^^^^^^^^

.. autoclass:: pl_bolts.models.self_supervised.MocoV2
.. autoclass:: pl_bolts.models.self_supervised.Moco_v2
:noindex:

SimCLR
1 change: 0 additions & 1 deletion pl_bolts/datasets/imagenet_dataset.py
@@ -1,7 +1,6 @@
import gzip
import hashlib
import os
import sys
import shutil
import sys # noqa F401
import tarfile
12 changes: 6 additions & 6 deletions pl_bolts/models/self_supervised/__init__.py
@@ -6,12 +6,12 @@
.. code-block ::
from pl_bolts.models.self_supervised import CPCV2
from pl_bolts.models.self_supervised import CPC_v2
images = get_imagenet_batch()
# extract unsupervised representations
pretrained = CPCV2(pretrained=True)
pretrained = CPC_v2(pretrained=True)
representations = pretrained(images)
# use these in classification or any downstream task
@@ -20,9 +20,9 @@
"""
from pl_bolts.models.self_supervised.amdim.amdim_module import AMDIM # noqa: F401
from pl_bolts.models.self_supervised.byol.byol_module import BYOL # noqa: F401
from pl_bolts.models.self_supervised.cpc.cpc_module import CPCV2 # noqa: F401
from pl_bolts.models.self_supervised.cpc.cpc_module import CPC_v2 # noqa: F401
from pl_bolts.models.self_supervised.evaluator import SSLEvaluator # noqa: F401
from pl_bolts.models.self_supervised.moco.moco2_module import MocoV2 # noqa: F401
from pl_bolts.models.self_supervised.moco.moco2_module import Moco_v2 # noqa: F401
from pl_bolts.models.self_supervised.simclr.simclr_module import SimCLR # noqa: F401
from pl_bolts.models.self_supervised.simsiam.simsiam_module import SimSiam # noqa: F401
from pl_bolts.models.self_supervised.ssl_finetuner import SSLFineTuner # noqa: F401
@@ -31,9 +31,9 @@
__all__ = [
"AMDIM",
"BYOL",
"CPCV2",
"CPC_v2",
"SSLEvaluator",
"MocoV2",
"Moco_v2",
"SimCLR",
"SimSiam",
"SSLFineTuner",