Tnlearn is an open-source Python library. It uses a symbolic regression algorithm to generate task-based neurons, and then builds neural networks from these diverse neurons.
- Quick links
- Motivation
- Features
- Overview
- Benchmarks
- Resource
- Dependencies
- Install
- Quick start
- API documentation
- Citation
- The Team
- License
- NeuroAI inspired: In the past decade, successful networks have mostly used a single type of neuron within novel architectures. Recent deep learning studies, inspired by the diversity of neurons in the human brain, have proposed new artificial neuron designs.
- Task-based neuron design: Given the human brain's reliance on task-based neurons, can artificial network design shift from task-based architecture design to task-based neuron design?
- Enhanced representation: Since no neuron type is universally optimal, task-based neurons can enhance feature representation ability within the same structure, thanks to their intrinsic inductive bias toward the task.
- Vectorized symbolic regression is employed to find optimal formulas that fit the input data.
- We parameterize the obtained elementary formula to create learnable parameters, which serve as the neuron's aggregation function.
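The two steps above can be sketched in miniature: evaluate a library of candidate elementary terms over the whole input vector at once, then fit one coefficient per term to pick out the terms that explain the data. The following NumPy least-squares toy is only an analogy for the idea, not tnlearn's actual `VecSymRegressor` algorithm; the term library and the fitting procedure here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, size=200)
y = 1.5 * x**2 - 0.5 * x + 0.3   # hidden "task formula" to recover

# Evaluate a small library of candidate elementary terms, vectorized over x.
terms = {"x": x, "x^2": x**2, "sin(x)": np.sin(x), "1": np.ones_like(x)}
A = np.column_stack(list(terms.values()))

# Least squares assigns one coefficient per candidate term; terms that do
# not help the fit receive coefficients near zero.
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
for name, c in zip(terms, coef):
    print(f"{name}: {c:+.3f}")
# The recovered coefficients match the hidden formula; the discovered formula
# would then be parameterized, with its coefficients made learnable, to act
# as the neuron's aggregation function.
```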
(A diagram describing the structure of tnlearn will be added here.)
We select several advanced machine learning methods for comparison.
Method | Venue | Code link |
---|---|---|
XGBoost | ACM SIGKDD 2016 | Adopt official code |
LightGBM | NeurIPS 2017 | Implemented by widedeep |
CatBoost | Journal of Big Data | Adopt official code |
TabNet | AAAI 2021 | Implemented by widedeep |
TabTransformer | arXiv | Adopt official code |
FT-Transformer | NeurIPS 2021 | Implemented by widedeep |
DANETs | AAAI 2022 | Adopt official code |
We test these methods on two real-world datasets. The test results (MSE) are shown in the following table:
Method | Particle collision | Asteroid prediction |
---|---|---|
XGBoost | | |
LightGBM | | |
CatBoost | | |
TabNet | | |
TabTransformer | | |
FT-Transformer | | |
DANETs | | |
Task-based Network | | |
Here is a resource summary for neuronal diversity in artificial networks.
Resource | Type | Description |
---|---|---|
QuadraLib | Library | QuadraLib is a library for the efficient optimization and design exploration of quadratic networks. The paper introducing QuadraLib won the MLSys 2022 Best Paper Award. |
Dr. Fenglei Fan’s GitHub Page | Code | Dr. Fenglei Fan’s GitHub Page summarizes a series of papers and associated code on quadratic networks, including quadratic autoencoder and the training algorithm ReLinear. |
Polynomial Network | Code | This repository shows how to build a deep polynomial network and sparsify it with tensor decomposition. |
Dendrite | Book | A comprehensive book covering all aspects of dendritic computation. |
Make sure your PyTorch version matches your CUDA version so that GPU acceleration works. Reference versions:
PyTorch >= 2.1.0
CUDA >= 12.1
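A quick way to verify the match is to inspect the CUDA runtime that your PyTorch build was compiled against. This is a sanity-check sketch; the versions printed depend on your installation.

```python
import torch

print("PyTorch:", torch.__version__)
print("CUDA (build):", torch.version.cuda)       # None on CPU-only builds
print("GPU available:", torch.cuda.is_available())
```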
Other major dependencies are automatically installed when installing tnlearn.
Tnlearn and its dependencies can be easily installed with pip:

```shell
pip install tnlearn
```
Tnlearn and its dependencies can be easily installed with conda:

```shell
conda install -c tnlearn
```
This quick example shows how to use tnlearn for a regression task. Note that your data should be tabular.

```python
from tnlearn import VecSymRegressor
from tnlearn import MLPRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Generate data.
X, y = make_regression(n_samples=200, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# A vectorized symbolic regression algorithm generates task-based neurons.
neuron = VecSymRegressor()
neuron.fit(X_train, y_train)

# Build a neural network from the task-based neurons and train it.
clf = MLPRegressor(neurons=neuron.neuron,
                   layers_list=[50, 30, 10])  # Structure of the hidden layers in the MLP.
clf.fit(X_train, y_train)

# Predict.
clf.predict(X_test)
```
Another quick example shows how to use the polynomial tensor regressor to build neurons:

```python
from tnlearn import PolynomialTensorRegression
from tnlearn import MLPRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Generate data.
X, y = make_regression(n_samples=200, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# A polynomial tensor regressor generates task-based neurons.
neuron = PolynomialTensorRegression()
neuron.fit(X_train, y_train)

# Build a neural network from the task-based neurons and train it.
clf = MLPRegressor(neurons=neuron.neuron,
                   layers_list=[50, 30, 10])  # Structure of the hidden layers in the MLP.
clf.fit(X_train, y_train)

# Predict.
clf.predict(X_test)
```
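The benchmarks above report MSE, and the predictions from either example can be scored the same way. A minimal, self-contained sketch using scikit-learn's `mean_squared_error` (a plain `LinearRegression` stands in here for the tnlearn model, since the scoring step is identical):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Same synthetic data as in the quick-start examples.
X, y = make_regression(n_samples=200, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Fit any regressor, then score its held-out predictions with MSE.
model = LinearRegression().fit(X_train, y_train)
mse = mean_squared_error(y_test, model.predict(X_test))
print(f"test MSE: {mse:.6f}")
```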
Tnlearn exposes many hyperparameters that can be tuned to further improve network performance. Please see the API documentation for specific usage.
Here's our official API documentation, available on Read the Docs.
If you find Tnlearn useful, please cite it in your publications.
@article{
}
Tnlearn is a work by Meng Wang, Juntong Fan, Hanyu Pei, and Fenglei Fan.
Tnlearn is released under Apache License 2.0.