  1. Dec 20, 2017
  2. Dec 18, 2017
    • sql: Remove dead opcodes, including table locking · f92bc1e6
      Kirill Yukhin authored
      Removed OP_ParseSchema and OP_TableLock. All other dead code
      connected to locking of open tables (for both reading and writing)
      is removed as well. Also remove the auth.c file, as it is no
      longer used.
    • sql: remove idb operand from VM opcodes · daa00ab8
      Kirill Yukhin authored
      A few opcodes, such as OpenRead/OpenWrite, expected the 3rd
      argument to be a DB descriptor. Now there is a single DB backend,
      so descriptors are no longer needed. Remove them from the VDBE,
      fix the code generator, and update the opcode descriptions.
      
      Closes #2855
  3. Dec 15, 2017
  4. Dec 08, 2017
  5. Dec 06, 2017
    • sql: failed DROP TABLE doesn't lead to truncation · 41ca410c
      Nikita Pettik authored
      Deletion of rows from a table (which is part of the DROP TABLE
      procedure) is now wrapped in a transaction. If any foreign key
      violations occur, the transaction is rolled back and execution of
      the statement halts. Otherwise, the changes are committed and the
      DROP TABLE routine continues. This logic is implemented by the
      OP_FkCheckCommit opcode.
      Added tests for this case.
      
      Closes #2953
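The same semantics can be illustrated with stock SQLite via Python's sqlite3 module (an analogy only, not Tarantool's implementation): dropping a table implies deleting its rows, and a foreign key violation aborts and rolls back the statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE child (pid INTEGER REFERENCES parent(id))")
conn.execute("INSERT INTO parent VALUES (1)")
conn.execute("INSERT INTO child VALUES (1)")

try:
    # DROP TABLE performs an implicit row deletion; the row in 'child'
    # still references parent(1), so the statement fails and rolls back.
    conn.execute("DROP TABLE parent")
except sqlite3.IntegrityError as e:
    print("DROP TABLE failed:", e)

# The table survives because the whole statement was rolled back.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)
```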
  6. Dec 05, 2017
    • remove sqlite's likely/SWAP completely · 2d38c663
      khatskevich authored
      Tarantool already had equivalents of these functions; this commit
      makes SQL reuse them. All related workarounds (undefs) are deleted.
    • sql: enable alter rename table · 9faea8e1
      Nikita Pettik authored
      Added a new opcode, OP_RenameTable, which implements the SQL
      statement: ALTER TABLE old_name RENAME TO new_name;
      The main idea is to replace the corresponding tuple in _space.
      The new tuple contains the new name and the new SQL statement
      that creates the table. If it is a parent table for a foreign
      key, the SQL statement of the child table is updated as well.
      Then the old name is removed from the hash table and, by calling
      a callback function from the VDBE, the database schema is updated
      with the new table name. Finally, statements of triggers in the
      _trigger table, if any exist, are updated in the same way.
      
      Closes #2204
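Stock SQLite shows the same idea (a sketch, not Tarantool's _space machinery): renaming a table also rewrites the stored CREATE statement that describes it in the schema table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE old_name (id INTEGER PRIMARY KEY)")
conn.execute("ALTER TABLE old_name RENAME TO new_name")

# The schema record (SQLite's analogue of the _space tuple) now carries
# the new name inside its SQL text.
sql, = conn.execute(
    "SELECT sql FROM sqlite_master WHERE name = 'new_name'").fetchone()
print(sql)
```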
  7. Nov 30, 2017
  8. Nov 29, 2017
  9. Nov 27, 2017
    • sql: reload table with unique constraint · c47e3648
      khatskevich authored
      For some reason the 'tnum' attribute for the PK was set in the
      convertToWithoutRowid function, while for other indexes it was set
      in sqlite3CreateIndex. The if clause in sqlite3CreateIndex filtered
      out not only the PK but also inline unique indexes.

      Now 'tnum' is always set in the sqlite3CreateIndex procedure.
      
      Closes #2808
    • digest: add pbkdf2 hashing · 93980aef
      Ilya authored
      * Add a PBKDF2 hashing API
      * Implemented as a wrapper over OpenSSL
      
      Closes #2874
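For reference, this is what PBKDF2 computes; the sketch below uses Python's stdlib hashlib (which in CPython is itself backed by OpenSSL), not Tarantool's digest API.

```python
import hashlib

# Derive a 32-byte key from a password by iterating HMAC-SHA256.
key = hashlib.pbkdf2_hmac(
    "sha256",            # underlying HMAC digest
    b"secret password",  # password to stretch
    b"per-user salt",    # salt (illustrative value)
    100_000,             # iteration count
    dklen=32,            # derived key length in bytes
)
print(key.hex())         # deterministic for fixed inputs
```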
    • vinyl: fix crash in vy_read_iterator_restore_mem · 44b4d7ec
      Vladimir Davydov authored
      vy_read_iterator_restore_mem() is called after a yield caused by a disk
      read to restore the position of the iterator over the active in-memory
      tree. It assumes that if a statement inserted into the active in-memory
      tree during the yield is equal to the statement at which the read
      iterator is positioned now (curr_stmt) by key but is older in terms of
      LSN, then the iterator must be positioned at a txw statement. However,
      the iterator could be positioned at an uncommitted statement stored in
      the cache before the yield. We don't restore the cache iterator after a
      yield, so if the new statement has been committed, its LSN will be less
      than the LSN of the uncommitted statement stored in the cache although
      it is indeed newer. This results in an assertion failure:
      
        vy_read_iterator.c:421: vy_read_iterator_restore_mem: Assertion `itr->curr_src == itr->txw_src' failed.
      
      To fix this, let's modify the code checking if the iterator should be
      repositioned to the active in-memory tree (mem_src) after a yield:
      instead of comparing statement LSNs, let's reposition the iterator
      unless it is currently positioned at a txw statement as it is the only
      case when curr_stmt can be newer than the newly inserted statement.
      
      Closes #2926
    • make coll_cmp_f attributes const · 4eeab69a
      khatskevich authored
      This is not a very important change, but implementing collations
      in SQL required either removing the const modifier from some SQL
      functions or making coll_cmp_f arguments const. The second option
      was chosen because it makes more sense.
    • rename func coll_cache_find -> coll_by_id · dcda29c9
      khatskevich authored
      This renaming is important because searching for collations by
      name is needed for the 1.8 branch. In 1.8 there are two functions
      for searching collations:
        - coll_cache_id
        - coll_cache_name

      The renaming also matters because the new naming is closer to
      that of similar functions in other modules.
    • Extend fio Lua API · a0fcaa88
      Vladimir Davydov authored
      In order to use fio in conjunction with ibuf, we need to extend read(),
      pread(), write(), pwrite() so that they can take a C buffer instead of
      a Lua string. The syntax is as follows:
      
        read(size) -> str
        read(buf, size) -> len
      
        pread(size, offset) -> str
        pread(buf, size, offset) -> len
      
        write(str)
        write(buf, size)
      
        pwrite(str, offset)
        pwrite(buf, size, offset)
      
      See #2755
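The two call shapes per function follow a common I/O API pattern: return a freshly allocated string, or fill a caller-supplied buffer and return the byte count. Python's io layer makes the same distinction, which sketches the design (an analogy only, not the fio API itself):

```python
import io

f = io.BytesIO(b"hello world")

# read(size) -> str: allocates and returns a new bytes object.
s = f.read(5)

# read(buf, size) -> len: fills a caller-supplied buffer and returns the
# number of bytes read, avoiding an extra copy (the point of ibuf support).
buf = bytearray(6)
n = f.readinto(buf)
print(s, n, bytes(buf))
```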
    • sql: remove code for Windows and obsolete OSes · c59efdac
      Nikita Pettik authored
      Deleted all system code connected with Windows and esoteric OSes,
      including VxWorks, DragonFly, etc. Now only UNIX systems are
      supported.
      
      Closes #2876
  10. Nov 21, 2017
  11. Nov 18, 2017
  12. Nov 17, 2017
    • vinyl: fix LSN assignment for indexes received during initial join · 8c48aa47
      Vladimir Davydov authored
      Since commit 7d67ec8a ("box: generate LSNs for rows received during
      initial join"), we assign fake, monotonically growing LSNs to all
      records received during initial join, because Vinyl requires all indexes
      to have a unique LSN for identification in vylog. The problem is there's
      a number of records (about 60) that have LSN 0 on the master - these are
      bootstrap records. So if there are N such records, the index that has
      LSN X on the master will have LSN (N + X) on the replica if sent during
      initial join. But there may be another index with the same LSN on the
      master (i.e. N + X). If this index is sent on final join or subscribe
      stage, the replica will fail to make a checkpoint or recover, spitting
      an error message similar to the one below:
      
        coio vy_log.c:2002 E> failed to process vylog record: create_index{index_lsn=68, space_id=12345, key_def=[0, 'unsigned'], }
        main/101/replica vy_log.c:1446 E> ER_INVALID_VYLOG_FILE: Invalid VYLOG file: Duplicate index id 68
        main/101/replica vy_log.c:2117 E> failed to load `./00000000000000000000.vylog'
        main/101/replica F> failed to create a checkpoint
      
      Presently, the problem is 100% reproducible by
      
        test_run.py --conf vinyl engine/replica_join
      
      To fix it, let's revert the aforementioned commit and instead assign
      fake LSNs only to vinyl records, not to all records received during
      initial join - after all it's vinyl that needs this, not memtx.
      
      Actually, we already use the same technique for DML records in vinyl -
      see vy_join_ctx::lsn - except there we assign fake LSNs on the master's
      side. This was done for historical reasons - we could as well assign
      them on the replica, but that would require some refactoring done later.
      So let's use the same fake LSN counter for both cases.
    • vinyl: improve logging in vylog · b0f063a0
      Vladimir Davydov authored
       - Log debug messages at VERBOSE log level. Currently, the only way to
         debug a vylog failure in production is enabling DEBUG log level, but
         that basically floods the log with tons of not so important messages.
         There are not that many vylog messages so it should be OK to log them
         with say_verbose().
      
       - Report the filename on failure to load or save a vylog in order to
         simplify identification of the corrupted file.
      
       - Log some critical errors, such as error processing a vylog record or
         failure to flush the vylog for recovery.
      
       - Add sentinel messages for log rotation, saving, and loading, logged
         at VERBOSE level. This will help matching dumped log records to high
         level operations that emitted them.
      
       - Remove debug logging from the vy_recovery callback, which is invoked
         by vy_recovery_iterate() and vy_recovery_load_index(), as it is a
         responsibility of the caller to write log messages there, not of
         the vylog internal implementation. Besides, dumping all replayed
         records there is not really necessary as they are dumped while vylog
         is loaded anyway (see vy_recovery_new()), which should be enough for
         debugging.
    • vinyl: skip uniqueness check during recovery · e0c76280
      Vladimir Davydov authored
      During recovery we apply rows that were successfully applied either
      locally before restart or on the master so conflicts are impossible.
      We already skip the uniqueness check for primary indexes in this case.
      Let's skip it for secondary indexes as well.
      
      Closes #2099
    • vinyl: discard tautological DELETEs on compaction · a6f45d87
      Vladimir Davydov authored
      The write iterator never discards DELETE statements referenced by a read
      view unless it is major compaction. However, a DELETE is useless in case
      it is preceded by another DELETE for the same key. Let's skip such
      tautological DELETEs. It is not only a useful optimization on its own -
      it will also help us annihilate INSERT+DELETE pairs on compaction.
      
      Needed for #2875
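The skipping rule described above can be sketched generically (names and data shapes here are illustrative, not vinyl's internals): given one key's statements ordered newest first, a DELETE whose next older statement is also a DELETE carries no information for any read view and can be dropped.

```python
def skip_tautological_deletes(stmts):
    """stmts: list of (op, lsn) tuples for a single key, newest first.

    Drop a DELETE when the next older statement for the same key is
    also a DELETE: every read view already sees the key as absent.
    """
    kept = []
    for i, (op, lsn) in enumerate(stmts):
        older_op = stmts[i + 1][0] if i + 1 < len(stmts) else None
        if op == "DELETE" and older_op == "DELETE":
            continue  # the older DELETE already hides the key
        kept.append((op, lsn))
    return kept

history = [("DELETE", 12), ("DELETE", 8), ("REPLACE", 5)]
print(skip_tautological_deletes(history))
# → [('DELETE', 8), ('REPLACE', 5)]
```

Discarding such pairs is also the stepping stone toward annihilating INSERT+DELETE pairs on compaction, as the message notes.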
  13. Nov 16, 2017