
NPE when hash join #8562

Closed
lilinghai opened this issue Dec 22, 2023 · 1 comment · Fixed by #8587
Labels: affects-7.5 (This bug affects the 7.5.x (LTS) versions.), component/compute, severity/major, type/bug (The issue is confirmed as a bug.)

Comments


lilinghai commented Dec 22, 2023

Bug Report

Please answer these questions before submitting your issue. Thanks!

1. Minimal reproduce step (Required)

CREATE TABLE `t2` (
  `a` int(11) NOT NULL,
  `b` int(11) DEFAULT NULL,
  PRIMARY KEY (`a`) /*T![clustered_index] CLUSTERED */
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin;
CREATE TABLE `t` (
  `a` int(11) NOT NULL,
  `b` int(11) DEFAULT NULL,
  PRIMARY KEY (`a`) /*T![clustered_index] CLUSTERED */
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin;
alter table t2 set tiflash replica 1;
alter table t set tiflash replica 1;
SELECT count(*) FROM test.t2 left outer join test.t on if(test.t2.a,null,null);
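The join condition if(test.t2.a, null, null) evaluates to NULL for every row, because both the true branch and the false branch of IF are NULL; the ON condition is therefore a constant NULL and can never be true. A quick standalone check of the condition (an extra query added here for illustration, not part of the original report):

SELECT IF(0, NULL, NULL) AS false_branch, IF(1, NULL, NULL) AS true_branch;
-- both columns are NULL

Running the SELECT count(*) query above against the TiFlash replicas crashes the TiFlash node with the following log: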
[2023/12/22 10:43:27.319 +08:00] [ERROR] [BaseDaemon.cpp:367] [########################################] [source=BaseDaemon] [thread_id=940]
[2023/12/22 10:43:27.319 +08:00] [ERROR] [BaseDaemon.cpp:368] ["(from thread 19) Received signal Segmentation fault(11)."] [source=BaseDaemon] [thread_id=940]
[2023/12/22 10:43:27.319 +08:00] [ERROR] [BaseDaemon.cpp:396] ["Address: NULL pointer."] [source=BaseDaemon] [thread_id=940]
[2023/12/22 10:43:27.319 +08:00] [ERROR] [BaseDaemon.cpp:404] ["Access: read."] [source=BaseDaemon] [thread_id=940]
[2023/12/22 10:43:27.319 +08:00] [ERROR] [BaseDaemon.cpp:413] ["Address not mapped to object."] [source=BaseDaemon] [thread_id=940]
[2023/12/22 10:43:27.336 +08:00] [ERROR] [BaseDaemon.cpp:560] [source=BaseDaemon] [thread_id=940]
       0x77b4801  faultSignalHandler(int, siginfo_t*, void*) [tiflash+125519873]
                  libs/libdaemon/src/BaseDaemon.cpp:211
  0x7f9950c67db0  <unknown symbol> [libc.so.6+347568]
       0x7efd703  DB::recordFilteredRows(DB::Block const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, COWPtr<DB::IColumn>::immutable_ptr<DB::IColumn>&, DB::PODArray<unsigned char, 4096ul, Allocator<false>, 15ul, 16ul> const*&) [tiflash+133158659]
                  dbms/src/Interpreters/JoinUtils.cpp:88
       0x7f0ae0e  DB::ProbeProcessInfo::prepareForCrossProbe(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::ASTTableJoin::Kind, DB::ASTTableJoin::Strictness, DB::Block const&, unsigned long, DB::CrossProbeMode, unsigned long) [tiflash+133213710]
                  dbms/src/Interpreters/ProbeProcessInfo.cpp:122
       0x7c6f98b  DB::Join::joinBlock(DB::ProbeProcessInfo&, bool) const [tiflash+130480523]
                  dbms/src/Interpreters/Join.cpp:2143
       0x88ee3a4  DB::HashJoinProbeTransformOp::transformHeaderImpl(DB::Block&) [tiflash+143582116]
                  dbms/src/Operators/HashJoinProbeTransformOp.cpp:61
       0x89bc75f  DB::PipelineExecBuilder::appendTransformOp(std::__1::unique_ptr<DB::TransformOp, std::__1::default_delete<DB::TransformOp> >&&) [tiflash+144426847]
                  dbms/src/Flash/Pipeline/Exec/PipelineExecBuilder.cpp:28
       0x8a64805  DB::PhysicalJoinProbe::buildPipelineExecGroupImpl(DB::PipelineExecutorContext&, DB::PipelineExecGroupBuilder&, DB::Context&, unsigned long) [tiflash+145115141]
                  dbms/src/Flash/Planner/Plans/PhysicalJoinProbe.cpp:50
       0x89f45ee  DB::PhysicalPlanNode::buildPipelineExecGroup(DB::PipelineExecutorContext&, DB::PipelineExecGroupBuilder&, DB::Context&, unsigned long) [tiflash+144655854]
                  dbms/src/Flash/Planner/PhysicalPlanNode.cpp:105
       0x89b1aaf  DB::Pipeline::buildExecGroup(DB::PipelineExecutorContext&, DB::Context&, unsigned long) [tiflash+144382639]
                  dbms/src/Flash/Pipeline/Pipeline.cpp:202
       0x89cc532  DB::PlainPipelineEvent::scheduleImpl() [tiflash+144491826]
                  dbms/src/Flash/Pipeline/Schedule/Events/PlainPipelineEvent.cpp:24
       0x89c56f2  DB::Event::schedule() [tiflash+144463602]
                  dbms/src/Flash/Pipeline/Schedule/Events/Event.cpp:132
       0x89c678c  DB::Event::finish() [tiflash+144467852]
                  dbms/src/Flash/Pipeline/Schedule/Events/Event.cpp:193
       0x89d52b0  DB::EventTask::finalizeImpl() [tiflash+144528048]
                  dbms/src/Flash/Pipeline/Schedule/Tasks/EventTask.cpp:41
       0x89db3e2  DB::Task::finalize() [tiflash+144552930]
                  dbms/src/Flash/Pipeline/Schedule/Tasks/Task.cpp:162
       0x1eb407b  DB::TaskThreadPool<DB::CPUImpl>::loop(unsigned long) [tiflash+32194683]
                  dbms/src/Flash/Pipeline/Schedule/ThreadPool/TaskThreadPool.cpp:61
       0x1eb47c6  void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (DB::TaskThreadPool<DB::CPUImpl>::*)(unsigned long), DB::TaskThreadPool<DB::CPUImpl>*, unsigned long> >(void*) [tiflash+32196550]
                  /usr/local/bin/../include/c++/v1/thread:291
  0x7f9950cb2802  start_thread [libc.so.6+653314]
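The topmost application frame, DB::recordFilteredRows at dbms/src/Interpreters/JoinUtils.cpp:88, reads through a NULL pointer while preparing the cross-join probe. Below is a minimal, self-contained C++ sketch of that general failure mode; the types and functions are illustrative stand-ins, not TiFlash's actual code, and the assumption is that the constant-NULL filter column produced by if(test.t2.a, null, null) does not take the code path that the nullable-column handling expects.

#include <cstddef>
#include <iostream>

// Illustrative column hierarchy; these are NOT TiFlash's real classes.
struct IColumn
{
    virtual ~IColumn() = default;
    virtual size_t size() const = 0;
};

struct ColumnNullable : IColumn
{
    size_t rows;
    explicit ColumnNullable(size_t r) : rows(r) {}
    size_t size() const override { return rows; }
};

// Stand-in for a constant NULL column such as the one produced by if(a, null, null).
struct ColumnConstNull : IColumn
{
    size_t rows;
    explicit ColumnConstNull(size_t r) : rows(r) {}
    size_t size() const override { return rows; }
};

// Risky pattern: assume the filter column is always a ColumnNullable and
// dereference the checked-cast result unconditionally.
size_t filteredRowsUnsafe(const IColumn & filterColumn)
{
    const auto * nullable = dynamic_cast<const ColumnNullable *>(&filterColumn);
    return nullable->size(); // NULL pointer dereference when the column is constant NULL
}

// Defensive variant: guard the cast and handle the constant / non-nullable case.
size_t filteredRowsSafe(const IColumn & filterColumn)
{
    if (const auto * nullable = dynamic_cast<const ColumnNullable *>(&filterColumn))
        return nullable->size();
    return filterColumn.size();
}

int main()
{
    ColumnConstNull constNull(3);
    std::cout << filteredRowsSafe(constNull) << '\n'; // prints 3
    // filteredRowsUnsafe(constNull);                 // would read through a null pointer
    return 0;
}

This only illustrates the crash pattern suggested by the trace; the actual fix is the change referenced above in #8587.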

2. What did you expect to see? (Required)

The query returns its count result without error.

3. What did you see instead (Required)

The TiFlash node crashes with the segmentation fault (NULL pointer read) shown in the log above.

4. What is your TiFlash version? (Required)

master

yibin87 (Contributor) commented Dec 25, 2023

Reproduced on a local machine.
