To augment the Mahalanobis distance evaluation ((r/sigma)^2) with a blend of the known smooth "robust" pseudo-Huber kernel and a `nullhypo` term. Special consideration is given to the loss gradient, which is both smooth and never overshoots the minimum during Newton-style optimization. The loss always varies between L1 and L2, with intuitive behavior for a given covariance `sigma` or null-hypothesis fraction `nh`. Note the separate parameter `nh`, because `sigma` must remain a consistent estimate of the inlier variance.
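A quick algebraic check (my own working, not stated in the issue) of why the posited imaginary δ makes the pseudo-Huber magnitude reduce to the Mahalanobis cost when `nh = 0`; note this derivation assumes σ ≥ 1, which covers the σ = 1, 2 cases plotted below:

```latex
% Pseudo-Huber with the posited delta = i r / sigma^2 (nh = 0 case, sigma >= 1):
L(r,\delta) = \delta^2\left(\sqrt{1 + (r/\delta)^2} - 1\right),
\qquad \delta = \frac{i\,r}{\sigma^2}

\left(\frac{r}{\delta}\right)^2 = \left(\frac{\sigma^2}{i}\right)^2 = -\sigma^4,
\qquad \delta^2 = -\frac{r^2}{\sigma^4}

L = -\frac{r^2}{\sigma^4}\left(\sqrt{1-\sigma^4} - 1\right)
  = \frac{r^2}{\sigma^4}\left(1 - i\sqrt{\sigma^4 - 1}\right)

|L| = \frac{r^2}{\sigma^4}\sqrt{1 + \left(\sigma^4 - 1\right)}
    = \left(\frac{r}{\sigma}\right)^2
```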
```julia
using GLMakie
using LinearAlgebra

##

# https://en.wikipedia.org/wiki/Huber_loss
L(r, δ_) = δ_^2 * (sqrt(1 + (r/δ_)^2) - 1)

# try: δ = im*r / σ^2, observed with condition such that: Mahalanobis = pseudo-Huber:
#   δ/im = δ'
#   (δ' - r^2/(σ^2*δ')) = sqrt(δ'^2 - r^2)
δ(r, σ; nh=0) = (1-nh)^4*im*r / σ^2 + (nh/(nh+1e-5))/(σ^2)
L_(r, σ; nh=0) = L(r, δ(r,σ;nh))

XX = -30:0.1:30
nh = 0.2

f = Figure()
ax = Axis(f[1,1];
  title="""
  L(r, δ_) = δ_^2 * (sqrt(1 + (r/δ_)^2) - 1)   # PseudoHuber loss
  Posit: δ = im*r / σ^2                        # dehann01
  δ(r, σ; nh=0) = (1-nh)^4*im*r / σ^2 + (nh/(nh+1e-5))/(σ^2)
  L_(r, σ; nh=0) = L(r, δ(r,σ;nh))""",
  xgridcolor = :gray, ygridcolor = :gray,
  xgridwidth = 1, ygridwidth = 1,
  xminorgridcolor = :gray, yminorgridcolor = :gray,
  xminorgridvisible = true, yminorgridvisible = true,
)

lines!(ax, XX, 1 .+ (XX.^2),                    color=:cyan,    label="σ=1,r^2")
lines!(ax, XX, norm.(L_.(XX, 1.0)),             color=:blue,    label="σ=1,nh=0")
lines!(ax, XX, norm.(L_.(XX, 1.0; nh)),         color=:red,     label="σ=1,nh=$nh")
lines!(ax, XX, norm.(L_.(XX, 2.0)),             color=:green,   label="σ=2,nh=0")
lines!(ax, XX, norm.(L_.(XX, 2.0; nh)),         color=:magenta, label="σ=2,nh=$nh")
lines!(ax, XX, norm.(L_.(XX, 1.0; nh=1.0)),     color=:orange,  label="σ=1,nh=1.0")
lines!(ax, XX, norm.(L_.(XX, 2.0; nh=1.0)),     color=:brown,   label="σ=2,nh=1.0")

axislegend()
f
```
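A minimal numerical sanity check (my own sketch, reusing the issue's definitions; only `LinearAlgebra` is needed, no plotting). It verifies the two limiting behaviors claimed above: `nh = 0` reproduces the pure Mahalanobis (L2) cost, and `nh = 1` gives an L1-like tail where doubling the residual roughly doubles the cost:

```julia
using LinearAlgebra

# Pseudo-Huber loss and the complex-δ blend, as defined in the issue
L(r, δ_) = δ_^2 * (sqrt(1 + (r/δ_)^2) - 1)
δ(r, σ; nh=0) = (1-nh)^4*im*r / σ^2 + (nh/(nh+1e-5))/(σ^2)
L_(r, σ; nh=0) = L(r, δ(r,σ;nh))

# nh = 0: magnitude equals the Mahalanobis cost (r/σ)^2 (checked here for σ ≥ 1)
@assert isapprox(norm(L_(3.0, 2.0)), (3.0/2.0)^2; rtol=1e-8)

# nh = 1: L1-like tail — doubling a large residual approximately doubles the cost
ratio = norm(L_(200.0, 1.0; nh=1.0)) / norm(L_(100.0, 1.0; nh=1.0))
@assert isapprox(ratio, 2.0; atol=0.02)
```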
cc @Affie also found this right after: https://arxiv.org/pdf/2004.14938.pdf