Changes in 4.19.9
media: vicodec: lower minimum height to 360
media: cec: check for non-OK/NACK conditions while claiming a LA
media: omap3isp: Unregister media device as first
media: ipu3-cio2: Unregister device nodes first, then release resources
iommu/vt-d: Fix NULL pointer dereference in prq_event_thread()
brcmutil: really fix decoding channel info for 160 MHz bandwidth
mt76: fix building without CONFIG_LEDS_CLASS
iommu/ipmmu-vmsa: Fix crash on early domain free
scsi: ufs: Fix hynix ufs bug with quirk on hi36xx SoC
can: ucan: remove set but not used variable 'udev'
can: rcar_can: Fix erroneous registration
test_firmware: fix error return getting clobbered
HID: input: Ignore battery reported by Symbol DS4308
batman-adv: Use explicit tvlv padding for ELP packets
batman-adv: Expand merged fragment buffer for full packet
amd/iommu: Fix Guest Virtual APIC Log Tail Address Register
bnx2x: Assign unique DMAE channel number for FW DMAE transactions.
qed: Fix PTT leak in qed_drain()
qed: Fix overriding offload_tc by protocols without APP TLV
qed: Fix rdma_info structure allocation
qed: Fix reading wrong value in loop condition
usb: dwc2: pci: Fix an error code in probe
Revert "usb: gadget: ffs: Fix BUG when userland exits with submitted AIO transfers"
s390/ism: clear dmbe_mask bit before SMC IRQ handling
nvme-fc: resolve io failures during connect
bnxt_en: Fix filling time in bnxt_fill_coredump_record()
drm/amdgpu: Add amdgpu "max bpc" connector property (v2)
drm/amd/display: Support amdgpu "max bpc" connector property (v2)
net/mlx4_core: Zero out lkey field in SW2HW_MPT fw command
net/mlx4_core: Fix uninitialized variable compilation warning
net/mlx4: Fix UBSAN warning of signed integer overflow
drivers/net/ethernet/qlogic/qed/qed_rdma.h: fix typo
gpio: pxa: fix legacy non pinctrl aware builds again
gpio: mockup: fix indicated direction
tc-testing: tdc.py: ignore errors when decoding stdout/stderr
tc-testing: tdc.py: Guard against lack of returncode in executed command
mtd: rawnand: qcom: Namespace prefix some commands
cpufreq: ti-cpufreq: Only register platform_device when supported
Revert "HID: uhid: use strlcpy() instead of strncpy()"
HID: multitouch: Add pointstick support for Cirque Touchpad
mtd: spi-nor: Fix Cadence QSPI page fault kernel panic
net: ena: fix crash during failed resume from hibernation
NFSv4: Fix a NFSv4 state manager deadlock
qed: Fix bitmap_weight() check
qed: Fix QM getters to always return a valid pq
net/ibmnvic: Fix deadlock problem in reset
riscv: fix warning in arch/riscv/include/asm/module.h
net: faraday: ftmac100: remove netif_running(netdev) check before disabling interrupts
iommu/vt-d: Use memunmap to free memremap
NFSv4.2 copy do not allocate memory under the lock
flexfiles: use per-mirror specified stateid for IO
ibmvnic: Fix RX queue buffer cleanup
ibmvnic: Update driver queues after change in ring size support
team: no need to do team_notify_peers or team_mcast_rejoin when disabling port
net: amd: add missing of_node_put()
usb: quirk: add no-LPM quirk on SanDisk Ultra Flair device
usb: appledisplay: Add 27" Apple Cinema Display
USB: check usb_get_extra_descriptor for proper size
USB: serial: console: fix reported terminal settings
ALSA: usb-audio: Add SMSL D1 to quirks for native DSD support
ALSA: usb-audio: Fix UAF decrement if card has no live interfaces in card.c
ALSA: hda: Add support for AMD Stoney Ridge
ALSA: pcm: Fix starvation on down_write_nonblock()
ALSA: pcm: Call snd_pcm_unlink() conditionally at closing
ALSA: pcm: Fix interval evaluation with openmin/max
ALSA: hda/realtek - Fix speaker output regression on Thinkpad T570
ALSA: hda/realtek: ALC286 mic and headset-mode fixups for Acer Aspire U27-880
ALSA: hda/realtek - Add support for Acer Aspire C24-860 headset mic
ALSA: hda/realtek: Fix mic issue on Acer AIO Veriton Z4660G
ALSA: hda/realtek: Fix mic issue on Acer AIO Veriton Z4860G/Z6860G
media: gspca: fix frame overflow error
media: vicodec: fix memchr() kernel oops
media: dvb-pll: fix tuner frequency ranges
media: dvb-pll: don't re-validate tuner frequencies
Revert "mfd: cros_ec: Use devm_kzalloc for private data"
parisc: Enable -ffunction-sections for modules on 32-bit kernel
virtio/s390: avoid race on vcdev->config
virtio/s390: fix race in ccw_io_helper()
vhost/vsock: fix use-after-free in network stack callers
arm64: hibernate: Avoid sending cross-calling with interrupts disabled
SUNRPC: Fix leak of krb5p encode pages
dmaengine: dw: Fix FIFO size for Intel Merrifield
Revert "dmaengine: imx-sdma: Use GFP_NOWAIT for dma allocations"
Revert "dmaengine: imx-sdma: alloclate bd memory from dma pool"
dmaengine: imx-sdma: implement channel termination via worker
dmaengine: imx-sdma: use GFP_NOWAIT for dma descriptor allocations
dmaengine: cppi41: delete channel from pending list when stop channel
ARM: 8806/1: kprobes: Fix false positive with FORTIFY_SOURCE
xhci: workaround CSS timeout on AMD SNPS 3.0 xHC
xhci: Prevent U1/U2 link pm states if exit latency is too long
arm64: dts: rockchip: remove vdd_log from rock960 to fix a stability issues
Revert "x86/e820: put !E820_TYPE_RAM regions into memblock.reserved"
cifs: Fix separator when building path from dentry
staging: rtl8712: Fix possible buffer overrun
Revert commit ef9209b642 "staging: rtl8723bs: Fix indenting errors and an off-by-one mistake in core/rtw_mlme_ext.c"
crypto: do not free algorithm before using
drm/amdgpu: update mc firmware image for polaris12 variants
drm/lease: Send a distinct uevent
drm/msm: Move fence put to where failure occurs
drm/amdgpu/gmc8: update MC firmware for polaris
drm/amdgpu/gmc8: always load MC firmware in the driver
drm/i915: Downgrade Gen9 Plane WM latency error
kprobes/x86: Fix instruction patching corruption when copying more than one RIP-relative instruction
x86/efi: Allocate e820 buffer before calling efi_exit_boot_service
Drivers: hv: vmbus: Offload the handling of channels to two workqueues
tty: serial: 8250_mtk: always resume the device in probe.
tty: do not set TTY_IO_ERROR flag if console port
gnss: sirf: fix activation retry handling
kgdboc: fix KASAN global-out-of-bounds bug in param_set_kgdboc_var()
libnvdimm, pfn: Pad pfn namespaces relative to other regions
cfg80211: Fix busy loop regression in ieee80211_ie_split_ric()
mac80211_hwsim: Timer should be initialized before device registered
mac80211: fix GFP_KERNEL under tasklet context
mac80211: Clear beacon_int in ieee80211_do_stop
mac80211: ignore tx status for PS stations in ieee80211_tx_status_ext
mac80211: fix reordering of buffered broadcast packets
mac80211: ignore NullFunc frames in the duplicate detection
HID: quirks: fix RetroUSB.com devices
Linux 4.19.9
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit e5bde04ccce64d808f8b00a489a1fe5825d285cb upstream.
In multiple functions, the algorithm fields are read after the algorithm's
reference is dropped through crypto_mod_put(). In this case, the algorithm
memory may be freed, resulting in use-after-free bugs. This patch delays the
put operation until the algorithm is no longer used.
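As a hedged illustration of the pattern the fix applies (the helper below is
hypothetical and this is not the upstream diff), the reference is kept until
after the last field access:
static int example_create(struct rtattr **tb)
{
        struct crypto_alg *alg;
        int err;

        alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER,
                                  CRYPTO_ALG_TYPE_MASK);
        if (IS_ERR(alg))
                return PTR_ERR(alg);

        /* Read alg->cra_blocksize etc. while the reference is still held. */
        err = build_instance_from(alg);         /* hypothetical helper */

        crypto_mod_put(alg);                    /* put only after the last use */
        return err;
}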
Fixes: 79c65d179a ("crypto: cbc - Convert to skcipher")
Fixes: a7d85e06ed ("crypto: cfb - add support for Cipher FeedBack mode")
Fixes: 043a44001b ("crypto: pcbc - Convert to skcipher")
Cc: <stable@vger.kernel.org>
Signed-off-by: Pan Bian <bianpan2016@163.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Changes in 4.19.6
HID: steam: remove input device when a hid client is running.
efi/libstub: arm: support building with clang
usb: core: Fix hub port connection events lost
usb: dwc3: gadget: fix ISOC TRB type on unaligned transfers
usb: dwc3: gadget: Properly check last unaligned/zero chain TRB
usb: dwc3: core: Clean up ULPI device
usb: dwc3: Fix NULL pointer exception in dwc3_pci_remove()
xhci: Fix leaking USB3 shared_hcd at xhci removal
xhci: handle port status events for removed USB3 hcd
xhci: Add check for invalid byte size error when UAS devices are connected.
usb: xhci: fix uninitialized completion when USB3 port got wrong status
usb: xhci: fix timeout for transition from RExit to U0
xhci: Add quirk to workaround the errata seen on Cavium Thunder-X2 Soc
usb: xhci: Prevent bus suspend if a port connect change or polling state is detected
ALSA: oss: Use kvzalloc() for local buffer allocations
MAINTAINERS: Add Sasha as a stable branch maintainer
Documentation/security-bugs: Clarify treatment of embargoed information
Documentation/security-bugs: Postpone fix publication in exceptional cases
mmc: sdhci-pci: Try "cd" for card-detect lookup before using NULL
mmc: sdhci-pci: Workaround GLK firmware failing to restore the tuning value
gpio: don't free unallocated ida on gpiochip_add_data_with_key() error path
iwlwifi: fix wrong WGDS_WIFI_DATA_SIZE
iwlwifi: mvm: support sta_statistics() even on older firmware
iwlwifi: mvm: fix regulatory domain update when the firmware starts
iwlwifi: mvm: don't use SAR Geo if basic SAR is not used
brcmfmac: fix reporting support for 160 MHz channels
opp: ti-opp-supply: Dynamically update u_volt_min
opp: ti-opp-supply: Correct the supply in _get_optimal_vdd_voltage call
tools/power/cpupower: fix compilation with STATIC=true
v9fs_dir_readdir: fix double-free on p9stat_read error
selinux: Add __GFP_NOWARN to allocation at str_read()
Input: synaptics - avoid using uninitialized variable when probing
bfs: add sanity check at bfs_fill_super()
sctp: clear the transport of some out_chunk_list chunks in sctp_assoc_rm_peer
gfs2: Don't leave s_fs_info pointing to freed memory in init_sbd
llc: do not use sk_eat_skb()
mm: don't warn about large allocations for slab
mm/memory.c: recheck page table entry with page table lock held
tcp: do not release socket ownership in tcp_close()
drm/fb-helper: Blacklist writeback when adding connectors to fbdev
drm/amdgpu: Add missing firmware entry for HAINAN
drm/vc4: Set ->legacy_cursor_update to false when doing non-async updates
drm/amdgpu: Fix oops when pp_funcs->switch_power_profile is unset
drm/i915: Disable LP3 watermarks on all SNB machines
drm/ast: change resolution may cause screen blurred
drm/ast: fixed cursor may disappear sometimes
drm/ast: Remove existing framebuffers before loading driver
can: flexcan: Unlock the MB unconditionally
can: dev: can_get_echo_skb(): factor out non sending code to __can_get_echo_skb()
can: dev: __can_get_echo_skb(): replace struct can_frame by canfd_frame to access frame length
can: dev: __can_get_echo_skb(): Don't crash the kernel if can_priv::echo_skb is accessed out of bounds
can: dev: __can_get_echo_skb(): print error message, if trying to echo non existing skb
can: rx-offload: introduce can_rx_offload_get_echo_skb() and can_rx_offload_queue_sorted() functions
can: rx-offload: rename can_rx_offload_irq_queue_err_skb() to can_rx_offload_queue_tail()
can: flexcan: use can_rx_offload_queue_sorted() for flexcan_irq_bus_*()
can: flexcan: handle tx-complete CAN frames via rx-offload infrastructure
can: raw: check for CAN FD capable netdev in raw_sendmsg()
can: hi311x: Use level-triggered interrupt
can: flexcan: Always use last mailbox for TX
can: flexcan: remove not needed struct flexcan_priv::tx_mb and struct flexcan_priv::tx_mb_idx
ACPICA: AML interpreter: add region addresses in global list during initialization
IB/hfi1: Eliminate races in the SDMA send error path
fsnotify: generalize handling of extra event flags
fanotify: fix handling of events on child sub-directory
pinctrl: meson: fix pinconf bias disable
pinctrl: meson: fix gxbb ao pull register bits
pinctrl: meson: fix gxl ao pull register bits
pinctrl: meson: fix meson8 ao pull register bits
pinctrl: meson: fix meson8b ao pull register bits
tools/testing/nvdimm: Fix the array size for dimm devices.
scsi: lpfc: fix remoteport access
scsi: hisi_sas: Remove set but not used variable 'dq_list'
KVM: PPC: Move and undef TRACE_INCLUDE_PATH/FILE
cpufreq: imx6q: add return value check for voltage scale
rtc: cmos: Do not export alarm rtc_ops when we do not support alarms
rtc: pcf2127: fix a kmemleak caused in pcf2127_i2c_gather_write
crypto: simd - correctly take reqsize of wrapped skcipher into account
floppy: fix race condition in __floppy_read_block_0()
powerpc/io: Fix the IO workarounds code to work with Radix
sched/fair: Fix cpu_util_wake() for 'execl' type workloads
perf/x86/intel/uncore: Add more IMC PCI IDs for KabyLake and CoffeeLake CPUs
block: copy ioprio in __bio_clone_fast() and bounce
SUNRPC: Fix a bogus get/put in generic_key_to_expire()
riscv: add missing vdso_install target
RISC-V: Silence some module warnings on 32-bit
drm/amdgpu: fix bug with IH ring setup
kdb: Use strscpy with destination buffer size
NFSv4: Fix an Oops during delegation callbacks
powerpc/numa: Suppress "VPHN is not supported" messages
efi/arm: Revert deferred unmap of early memmap mapping
z3fold: fix possible reclaim races
mm, memory_hotplug: check zone_movable in has_unmovable_pages
tmpfs: make lseek(SEEK_DATA/SEK_HOLE) return ENXIO with a negative offset
mm, page_alloc: check for max order in hot path
dax: Avoid losing wakeup in dax_lock_mapping_entry
include/linux/pfn_t.h: force '~' to be parsed as an unary operator
tty: wipe buffer.
tty: wipe buffer if not echoing data
gfs2: Fix iomap buffer head reference counting bug
rcu: Make need_resched() respond to urgent RCU-QS needs
media: ov5640: Re-work MIPI startup sequence
media: ov5640: Fix timings setup code
media: ov5640: fix exposure regression
media: ov5640: fix auto gain & exposure when changing mode
media: ov5640: fix wrong binning value in exposure calculation
media: ov5640: fix auto controls values when switching to manual mode
Linux 4.19.6
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Add support for the Adiantum encryption mode. Adiantum was designed by
Paul Crowley and is specified by our paper:
Adiantum: length-preserving encryption for entry-level processors
(https://eprint.iacr.org/2018/720.pdf)
See our paper for full details; this patch only provides an overview.
Adiantum is a tweakable, length-preserving encryption mode designed for
fast and secure disk encryption, especially on CPUs without dedicated
crypto instructions. Adiantum encrypts each sector using the XChaCha12
stream cipher, two passes of an ε-almost-∆-universal (εA∆U) hash
function, and an invocation of the AES-256 block cipher on a single
16-byte block. On CPUs without AES instructions, Adiantum is much
faster than AES-XTS; for example, on ARM Cortex-A7, on 4096-byte sectors
Adiantum encryption is about 4 times faster than AES-256-XTS encryption,
and decryption about 5 times faster.
Adiantum is a specialization of the more general HBSH construction. Our
earlier proposal, HPolyC, was also an HBSH specialization, but it used a
different εA∆U hash function, one based on Poly1305 only. Adiantum's
εA∆U hash function, which is based primarily on the "NH" hash function
like that used in UMAC (RFC4418), is about twice as fast as HPolyC's;
consequently, Adiantum is about 20% faster than HPolyC.
This speed comes with no loss of security: Adiantum is provably just as
secure as HPolyC, in fact slightly *more* secure. Like HPolyC,
Adiantum's security is reducible to that of XChaCha12 and AES-256,
subject to a security bound. XChaCha12 itself has a security reduction
to ChaCha12. Therefore, one need not "trust" Adiantum; one need only
trust ChaCha12 and AES-256. Note that the εA∆U hash function is only
used for its proven combinatorial properties, so it cannot be "broken".
Adiantum is also a true wide-block encryption mode, so flipping any
plaintext bit in the sector scrambles the entire ciphertext, and vice
versa. No other such mode is available in the kernel currently; doing
the same with XTS scrambles only 16 bytes. Adiantum also supports
arbitrary-length tweaks and naturally supports any length input >= 16
bytes without needing "ciphertext stealing".
For the stream cipher, Adiantum uses XChaCha12 rather than XChaCha20 in
order to make encryption feasible on the widest range of devices.
Although the 20-round variant is quite popular, the best known attacks
on ChaCha are on only 7 rounds, so ChaCha12 still has a substantial
security margin; in fact, larger than AES-256's. 12-round Salsa20 is
also the eSTREAM recommendation. For the block cipher, Adiantum uses
AES-256, despite it having a lower security margin than XChaCha12 and
needing table lookups, due to AES's extensive adoption and analysis
making it the obvious first choice. Nevertheless, for flexibility this
patch also permits the "adiantum" template to be instantiated with
XChaCha20 and/or with an alternate block cipher.
We need Adiantum support in the kernel for use in dm-crypt and fscrypt,
where currently the only other suitable options are block cipher modes
such as AES-XTS. A big problem with this is that many low-end mobile
devices (e.g. Android Go phones sold primarily in developing countries,
as well as some smartwatches) still have CPUs that lack AES
instructions, e.g. ARM Cortex-A7. Sadly, AES-XTS encryption is much too
slow to be viable on these devices. We did find that some "lightweight"
block ciphers are fast enough, but these suffer from problems such as
not having much cryptanalysis or being too controversial.
The ChaCha stream cipher has excellent performance but is insecure to
use directly for disk encryption, since each sector's IV is reused each
time it is overwritten. Even restricting the threat model to offline
attacks only isn't enough, since modern flash storage devices don't
guarantee that "overwrites" are really overwrites, due to wear-leveling.
Adiantum avoids this problem by constructing a
"tweakable super-pseudorandom permutation"; this is the strongest
possible security model for length-preserving encryption.
Of course, storing random nonces along with the ciphertext would be the
ideal solution. But doing that with existing hardware and filesystems
runs into major practical problems; in most cases it would require data
journaling (like dm-integrity) which severely degrades performance.
Thus, for now length-preserving encryption is still needed.
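For readers wanting to see what driving the new mode from kernel code looks
like, here is a minimal, hedged sketch built on the generic skcipher API. The
"adiantum(xchacha12,aes)" name comes from this patch; the 32-byte key,
4096-byte sector and the synchronous-wait plumbing are illustrative
assumptions, not part of the patch.
#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

/* Encrypt one 4096-byte sector in place. 'sector' must be a kmalloc'd
 * buffer, 'key' is the 32-byte Adiantum key and 'tweak' points to an
 * ivsize-byte tweak. */
static int adiantum_encrypt_sector(u8 *sector, const u8 *key, u8 *tweak)
{
        struct crypto_skcipher *tfm;
        struct skcipher_request *req = NULL;
        struct scatterlist sg;
        DECLARE_CRYPTO_WAIT(wait);
        int err;

        tfm = crypto_alloc_skcipher("adiantum(xchacha12,aes)", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        err = crypto_skcipher_setkey(tfm, key, 32);
        if (err)
                goto out;

        req = skcipher_request_alloc(tfm, GFP_KERNEL);
        if (!req) {
                err = -ENOMEM;
                goto out;
        }

        sg_init_one(&sg, sector, 4096);
        skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
                                      crypto_req_done, &wait);
        skcipher_request_set_crypt(req, &sg, &sg, 4096, tweak);

        err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
out:
        skcipher_request_free(req);
        crypto_free_skcipher(tfm);
        return err;
}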
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
(cherry picked from commit 059c2a4d8e164dccc3078e49e7f286023b019a98
https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master)
Conflicts:
crypto/tcrypt.c
Bug: 112008522
Test: Among other things, I ran the relevant crypto self-tests:
1.) Build kernel with CONFIG_CRYPTO_MANAGER_DISABLE_TESTS *unset*, and
all relevant crypto algorithms built-in, including:
CONFIG_CRYPTO_ADIANTUM=y
CONFIG_CRYPTO_CHACHA20=y
CONFIG_CRYPTO_CHACHA20_NEON=y
CONFIG_CRYPTO_NHPOLY1305=y
CONFIG_CRYPTO_NHPOLY1305_NEON=y
CONFIG_CRYPTO_POLY1305=y
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_AES_ARM=y
2.) Boot and check dmesg for test failures.
3.) Instantiate "adiantum(xchacha12,aes)" and
"adiantum(xchacha20,aes)" to trigger them to be tested. There are
many ways to do this, but one way is to create a dm-crypt target
that uses them, e.g.
key=$(hexdump -n 32 -e '16/4 "%08X" 1 "\n"' /dev/urandom)
dmsetup create crypt --table "0 $((1<<17)) crypt xchacha12,aes-adiantum-plain64 $key 0 /dev/vdc 0"
dmsetup remove crypt
dmsetup create crypt --table "0 $((1<<17)) crypt xchacha20,aes-adiantum-plain64 $key 0 /dev/vdc 0"
dmsetup remove crypt
4.) Check dmesg for test failures again.
5.) Do 1-4 on both x86_64 (for basic testing) and on arm32 (for
testing the ARM32-specific implementations). I did the arm32 kernel
testing on Raspberry Pi 2, which is a BCM2836-based device that can
run the upstream and Android common kernels.
The same ARM32 assembly files for ChaCha, NHPoly1305, and AES are
also included in the userspace Adiantum benchmark suite at
https://github.com/google/adiantum, where they have undergone
additional correctness testing.
Change-Id: Ic61c13b53facfd2173065be715a7ee5f3af8760b
Signed-off-by: Eric Biggers <ebiggers@google.com>
Add a generic implementation of NHPoly1305, an ε-almost-∆-universal hash
function used in the Adiantum encryption mode.
CONFIG_NHPOLY1305 is not selectable by itself since there won't be any
real reason to enable it without also enabling Adiantum support.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
(cherry picked from commit 26609a21a9460145e37d90947ad957b358a05288
https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master)
Bug: 112008522
Test: As series, see Ic61c13b53facfd2173065be715a7ee5f3af8760b
Change-Id: If6f00c01fab530fc2458c44ca111f84604cb85c1
Signed-off-by: Eric Biggers <ebiggers@google.com>
Expose a low-level Poly1305 API which implements the
ε-almost-∆-universal (εA∆U) hash function underlying the Poly1305 MAC
and supports block-aligned inputs only.
This is needed for Adiantum hashing, which builds an εA∆U hash function
from NH and a polynomial evaluation in GF(2^{130}-5); this polynomial
evaluation is identical to the one the Poly1305 MAC does. However, the
crypto_shash Poly1305 API isn't very appropriate for this because its
calling convention assumes it is used as a MAC, with a 32-byte "one-time
key" provided for every digest.
But by design, in Adiantum hashing the performance of the polynomial
evaluation isn't nearly as critical as NH. So it suffices to just have
some C helper functions. Thus, this patch adds such functions.
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
(cherry picked from commit 1b6fd3d5d18bbc1b1abf3b0cbc4b95a9a63d407b
https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master)
Bug: 112008522
Test: As series, see Ic61c13b53facfd2173065be715a7ee5f3af8760b
Change-Id: I5c7da7832b84dfe29c300e117a158740d3e39069
Signed-off-by: Eric Biggers <ebiggers@google.com>
In preparation for exposing a low-level Poly1305 API which implements
the ε-almost-∆-universal (εA∆U) hash function underlying the Poly1305
MAC and supports block-aligned inputs only, create structures
poly1305_key and poly1305_state which hold the limbs of the Poly1305
"r" key and accumulator, respectively.
These structures could actually have the same type (e.g. poly1305_val),
but different types are preferable, to prevent misuse.
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
(cherry picked from commit 878afc35cd28bcd93cd3c5e1985ef39a104a4d45
https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master)
Bug: 112008522
Test: As series, see Ic61c13b53facfd2173065be715a7ee5f3af8760b
Change-Id: If20a0f9d29d8ba1efd43a5eb3fafce7720afe565
Signed-off-by: Eric Biggers <ebiggers@google.com>
Now that the generic implementation of ChaCha20 has been refactored to
allow varying the number of rounds, add support for XChaCha12, which is
the XSalsa construction applied to ChaCha12. ChaCha12 is one of the
three ciphers specified by the original ChaCha paper
(https://cr.yp.to/chacha/chacha-20080128.pdf: "ChaCha, a variant of
Salsa20"), alongside ChaCha8 and ChaCha20. ChaCha12 is faster than
ChaCha20 but has a lower, but still large, security margin.
We need XChaCha12 support so that it can be used in the Adiantum
encryption mode, which enables disk/file encryption on low-end mobile
devices where AES-XTS is too slow as the CPUs lack AES instructions.
We'd prefer XChaCha20 (the more popular variant), but it's too slow on
some of our target devices, so at least in some cases we do need the
XChaCha12-based version. In more detail, the problem is that Adiantum
is still much slower than we're happy with, and encryption still has a
quite noticeable effect on the feel of low-end devices. Users and
vendors push back hard against encryption that degrades the user
experience, which always risks encryption being disabled entirely. So
we need to choose the fastest option that gives us a solid margin of
security, and here that's XChaCha12. The best known attack on ChaCha
breaks only 7 rounds and has 2^235 time complexity, so ChaCha12's
security margin is still better than AES-256's. Much has been learned
about cryptanalysis of ARX ciphers since Salsa20 was originally designed
in 2005, and it now seems we can be comfortable with a smaller number of
rounds. The eSTREAM project also suggests the 12-round version of
Salsa20 as providing the best balance among the different variants:
combining very good performance with a "comfortable margin of security".
Note that it would be trivial to add vanilla ChaCha12 in addition to
XChaCha12. However, it's unneeded for now and therefore is omitted.
As discussed in the patch that introduced XChaCha20 support, I
considered splitting the code into separate chacha-common, chacha20,
xchacha20, and xchacha12 modules, so that these algorithms could be
enabled/disabled independently. However, since nearly all the code is
shared anyway, I ultimately decided there would have been little benefit
to the added complexity.
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
(cherry picked from commit aa7624093cb7fbf4fea95e612580d8d29a819f67
https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master)
Bug: 112008522
Test: As series, see Ic61c13b53facfd2173065be715a7ee5f3af8760b
Change-Id: I876a5be92e9f583effcd35a4b66a36608ac581f0
Signed-off-by: Eric Biggers <ebiggers@google.com>
In preparation for adding XChaCha12 support, rename/refactor
chacha20-generic to support different numbers of rounds. The
justification for needing XChaCha12 support is explained in more detail
in the patch "crypto: chacha - add XChaCha12 support".
The only difference between ChaCha{8,12,20} are the number of rounds
itself; all other parts of the algorithm are the same. Therefore,
remove the "20" from all definitions, structures, functions, files, etc.
that will be shared by all ChaCha versions.
Also make ->setkey() store the round count in the chacha_ctx (previously
chacha20_ctx). The generic code then passes the round count through to
chacha_block(). There will be a ->setkey() function for each explicitly
allowed round count; the encrypt/decrypt functions will be the same. I
decided not to do it the opposite way (same ->setkey() function for all
round counts, with different encrypt/decrypt functions) because that
would have required more boilerplate code in architecture-specific
implementations of ChaCha and XChaCha.
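As a rough sketch of the ->setkey()-per-round-count arrangement described
above (the structure and helpers are reconstructed from this description, not
copied from the patch):
#include <asm/unaligned.h>
#include <crypto/chacha.h>              /* CHACHA_KEY_SIZE after the rename */
#include <crypto/internal/skcipher.h>

struct chacha_ctx {
        u32 key[8];
        int nrounds;                    /* 12 or 20, recorded at setkey time */
};

static int chacha_setkey(struct crypto_skcipher *tfm, const u8 *key,
                         unsigned int keysize, int nrounds)
{
        struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
        int i;

        if (keysize != CHACHA_KEY_SIZE)         /* 32 bytes */
                return -EINVAL;

        for (i = 0; i < ARRAY_SIZE(ctx->key); i++)
                ctx->key[i] = get_unaligned_le32(key + i * sizeof(u32));

        ctx->nrounds = nrounds;
        return 0;
}

/* One thin wrapper per explicitly allowed round count; the shared
 * encrypt/decrypt path then passes ctx->nrounds down to chacha_block(). */
static int crypto_chacha20_setkey(struct crypto_skcipher *tfm, const u8 *key,
                                  unsigned int keysize)
{
        return chacha_setkey(tfm, key, keysize, 20);
}

static int crypto_chacha12_setkey(struct crypto_skcipher *tfm, const u8 *key,
                                  unsigned int keysize)
{
        return chacha_setkey(tfm, key, keysize, 12);
}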
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
(cherry picked from commit 1ca1b917940c24ca3d1f490118c5474168622953
https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master)
Conflicts:
arch/x86/crypto/chacha20_glue.c
drivers/crypto/caam/caamalg.c
drivers/crypto/caam/caamalg_qi2.c
drivers/crypto/caam/compat.h
include/crypto/chacha20.h
Bug: 112008522
Test: As series, see Ic61c13b53facfd2173065be715a7ee5f3af8760b
Change-Id: I7fa203ddc7095ce8675a32f49b8a5230cd0cf5f6
Signed-off-by: Eric Biggers <ebiggers@google.com>
Add support for the XChaCha20 stream cipher. XChaCha20 is the
application of the XSalsa20 construction
(https://cr.yp.to/snuffle/xsalsa-20081128.pdf) to ChaCha20 rather than
to Salsa20. XChaCha20 extends ChaCha20's nonce length from 64 bits (or
96 bits, depending on convention) to 192 bits, while provably retaining
ChaCha20's security. XChaCha20 uses the ChaCha20 permutation to map the
key and first 128 nonce bits to a 256-bit subkey. Then, it does the
ChaCha20 stream cipher with the subkey and remaining 64 bits of nonce.
We need XChaCha support in order to add support for the Adiantum
encryption mode. Note that to meet our performance requirements, we
actually plan to primarily use the variant XChaCha12. But we believe
it's wise to first add XChaCha20 as a baseline with a higher security
margin, in case there are any situations where it can be used.
Supporting both variants is straightforward.
Since XChaCha20's subkey differs for each request, XChaCha20 can't be a
template that wraps ChaCha20; that would require re-keying the
underlying ChaCha20 for every request, which wouldn't be thread-safe.
Instead, we make XChaCha20 its own top-level algorithm which calls the
ChaCha20 streaming implementation internally.
Similar to the existing ChaCha20 implementation, we define the IV to be
the nonce and stream position concatenated together. This allows users
to seek to any position in the stream.
I considered splitting the code into separate chacha20-common, chacha20,
and xchacha20 modules, so that chacha20 and xchacha20 could be
enabled/disabled independently. However, since nearly all the code is
shared anyway, I ultimately decided there would have been little benefit
to the added complexity of separate modules.
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
(cherry picked from commit de61d7ae5d3789dcba3749a418f76613fbee8414
https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master)
Bug: 112008522
Test: As series, see Ic61c13b53facfd2173065be715a7ee5f3af8760b
Change-Id: I5c878e1d6577abda11d7b737cbb650baf16b6886
Signed-off-by: Eric Biggers <ebiggers@google.com>
Make the ARM scalar AES implementation closer to constant-time by
disabling interrupts and prefetching the tables into L1 cache. This is
feasible because due to ARM's "free" rotations, the main tables are only
1024 bytes instead of the usual 4096 used by most AES implementations.
On ARM Cortex-A7, the speed loss is only about 5%. The resulting code
is still over twice as fast as aes_ti.c. Responsiveness is potentially
a concern, but interrupts are only disabled for a single AES block.
Note that even after these changes, the implementation still isn't
necessarily guaranteed to be constant-time; see
https://cr.yp.to/antiforgery/cachetiming-20050414.pdf for a discussion
of the many difficulties involved in writing truly constant-time AES
software. But it's valuable to make such attacks more difficult.
Much of this patch is based on patches suggested by Ard Biesheuvel.
Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
(cherry picked from commit 913a3aa07d16e5b302f408d497a4b829910de247
https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master)
Bug: 112008522
Test: As series, see Ic61c13b53facfd2173065be715a7ee5f3af8760b
Change-Id: I453a7b71c3bb0051106b37cdb71d4511fd4e388a
Signed-off-by: Eric Biggers <ebiggers@google.com>
In commit 9f480faec5 ("crypto: chacha20 - Fix keystream alignment for
chacha20_block()"), I had missed that chacha20_block() can be called
directly on the buffer passed to get_random_bytes(), which can have any
alignment. So, while my commit didn't break anything, it didn't fully
solve the alignment problems.
Revert my solution and just update chacha20_block() to use
put_unaligned_le32(), so the output buffer need not be aligned.
This is simpler, and on many CPUs it's the same speed.
But, I kept the 'tmp' buffers in extract_crng_user() and
_get_random_bytes() 4-byte aligned, since that alignment is actually
needed for _crng_backtrack_protect() too.
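The keystream emission after this change looks roughly like the following
sketch (the helper name is hypothetical; it illustrates the store inside
chacha20_block(), not the exact hunk):
#include <asm/unaligned.h>
#include <linux/types.h>

static void chacha_emit_keystream(u8 *stream, const u32 x[16])
{
        int i;

        /* 'stream' may have any alignment: put_unaligned_le32() handles
         * unaligned stores, so no 4-byte alignment requirement leaks out to
         * callers such as get_random_bytes(). */
        for (i = 0; i < 16; i++)
                put_unaligned_le32(x[i], &stream[i * sizeof(u32)]);
}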
Reported-by: Stephan Müller <smueller@chronox.de>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
(cherry picked from commit a5e9f557098e54af44ade5d501379be18435bfbf)
Bug: 112008522
Test: As series, see Ic61c13b53facfd2173065be715a7ee5f3af8760b
Change-Id: Ic355d2416330ae2f4a50cb7064633810e35a93bf
Signed-off-by: Eric Biggers <ebiggers@google.com>
[ Upstream commit 508a1c4df085a547187eed346f1bfe5e381797f1 ]
The simd wrapper's skcipher request context structure consists
of a single subrequest whose size is taken from the subordinate
skcipher. However, in simd_skcipher_init(), the reqsize that is
retrieved is not from the subordinate skcipher but from the
cryptd request structure, whose size is completely unrelated to
the actual wrapped skcipher.
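A hedged sketch of the sizing logic this implies (not the upstream hunk; the
real fix may additionally account for the async cryptd path):
/* In simd_skcipher_init(): size the wrapper's request context from the
 * wrapped child skcipher rather than from the cryptd request structure. */
reqsize = crypto_skcipher_reqsize(cryptd_skcipher_child(cryptd_tfm));
reqsize += sizeof(struct skcipher_request);
crypto_skcipher_set_reqsize(tfm, reqsize);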
Reported-by: Qian Cai <cai@gmx.us>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Qian Cai <cai@gmx.us>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sashal@kernel.org>
commit f43f39958beb206b53292801e216d9b8a660f087 upstream.
All bytes of the NETLINK_CRYPTO report structures must be initialized,
since they are copied to userspace. The change from strncpy() to
strlcpy() broke this. As a minimal fix, change it back.
Fixes: 4473710df1 ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME expansion")
Cc: <stable@vger.kernel.org> # v4.12+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 578bdaabd015b9b164842c3e8ace9802f38e7ecc upstream.
These are unused, undesired, and have never actually been used by
anybody. The original authors of this code have changed their mind about
its inclusion. While originally proposed for disk encryption on low-end
devices, the idea was discarded [1] in favor of something else before
that could really get going. Therefore, this patch removes Speck.
[1] https://marc.info/?l=linux-crypto-vger&m=153359499015659
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Acked-by: Eric Biggers <ebiggers@google.com>
Cc: stable@vger.kernel.org
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4a34e3c2f2f48f47213702a84a123af0fe21ad60 upstream.
Use the correct __le32 annotation and accessors to perform the
single round of AES encryption performed inside the AEGIS transform.
Otherwise, tcrypt reports:
alg: aead: Test 1 failed on encryption for aegis128-generic
00000000: 6c 25 25 4a 3c 10 1d 27 2b c1 d4 84 9a ef 7f 6e
alg: aead: Test 1 failed on encryption for aegis128l-generic
00000000: cd c6 e3 b8 a0 70 9d 8e c2 4f 6f fe 71 42 df 28
alg: aead: Test 1 failed on encryption for aegis256-generic
00000000: aa ed 07 b1 96 1d e9 e6 f2 ed b5 8e 1c 5f dc 1c
Fixes: f606a88e58 ("crypto: aegis - Add generic AEGIS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5a8dedfa3276e88c5865f265195d63d72aec3e72 upstream.
Omit the endian swabbing when folding the lengths of the assoc and
crypt input buffers into the state to finalize the tag. This is not
necessary given that the memory representation of the state is in
machine native endianness already.
This fixes an error reported by tcrypt running on a big endian system:
alg: aead: Test 2 failed on encryption for morus640-generic
00000000: a8 30 ef fb e6 26 eb 23 b0 87 dd 98 57 f3 e1 4b
00000010: 21
alg: aead: Test 2 failed on encryption for morus1280-generic
00000000: 88 19 1b fb 1c 29 49 0e ee 82 2f cb 97 a6 a5 ee
00000010: 5f
Fixes: 396be41f16 ("crypto: morus - Add generic MORUS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fbe1a850b3b1522e9fc22319ccbbcd2ab05328d2 upstream.
When the LRW block counter overflows, the current implementation returns
128 as the index to the precomputed multiplication table, which has 128
entries. This patch fixes it to return the correct value (127).
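The fixed counter-increment helper, reconstructed here as a sketch of the
change described above:
#include <linux/bitops.h>
#include <linux/types.h>

/* Increment the 128-bit block counter held in four u32 words and return the
 * index of the precomputed multiplication-table entry to use. */
static int next_index(u32 *counter)
{
        int i, res = 0;

        for (i = 0; i < 4; i++) {
                if (counter[i] + 1 != 0)
                        return res + ffz(counter[i]++);
                counter[i] = 0;
                res += 32;
        }

        /* Full wrap-around from all-ones to all-zeros: the table only has
         * entries 0..127, so return 127 rather than the out-of-bounds 128. */
        return 127;
}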
Fixes: 64470f1b85 ("[CRYPTO] lrw: Liskov Rivest Wagner, a tweakable narrow block cipher mode")
Cc: <stable@vger.kernel.org> # 2.6.20+
Reported-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 89ab066d4229acd32e323f1569833302544a4186 ]
This reverts commit dd979b4df8.
This broke tcp_poll for SMC fallback: An AF_SMC socket establishes an
internal TCP socket for the initial handshake with the remote peer.
Whenever the SMC connection cannot be established, this TCP socket is
used as a fallback. All socket operations on the SMC socket are then
forwarded to the TCP socket. In case of poll, the file->private_data
pointer references the SMC socket because the TCP socket has no file
assigned. This causes tcp_poll to wait on the wrong socket.
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pull DMAengine updates from Vinod Koul:
"This round brings couple of framework changes, a new driver and usual
driver updates:
- new managed helper for dmaengine framework registration
- split dmaengine pause capability to pause and resume and allow
drivers to report that individually
- update dma_request_chan_by_mask() to handle deferred probing
- move imx-sdma to use virt-dma
- new driver for Actions Semi Owl family S900 controller
- minor updates to intel, renesas, mv_xor, pl330 etc"
* tag 'dmaengine-4.19-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (46 commits)
dmaengine: Add Actions Semi Owl family S900 DMA driver
dt-bindings: dmaengine: Add binding for Actions Semi Owl SoCs
dmaengine: sh: rcar-dmac: Should not stop the DMAC by rcar_dmac_sync_tcr()
dmaengine: mic_x100_dma: use the new helper to simplify the code
dmaengine: add a new helper dmaenginem_async_device_register
dmaengine: imx-sdma: add memcpy interface
dmaengine: imx-sdma: add SDMA_BD_MAX_CNT to replace '0xffff'
dmaengine: dma_request_chan_by_mask() to handle deferred probing
dmaengine: pl330: fix irq race with terminate_all
dmaengine: Revert "dmaengine: mv_xor_v2: enable COMPILE_TEST"
dmaengine: mv_xor_v2: use {lower,upper}_32_bits to configure HW descriptor address
dmaengine: mv_xor_v2: enable COMPILE_TEST
dmaengine: mv_xor_v2: move unmap to before callback
dmaengine: mv_xor_v2: convert callback to helper function
dmaengine: mv_xor_v2: kill the tasklets upon exit
dmaengine: mv_xor_v2: explicitly freeup irq
dmaengine: sh: rcar-dmac: Add dma_pause operation
dmaengine: sh: rcar-dmac: add a new function to clear CHCR.DE with barrier
dmaengine: idma64: Support dmaengine_terminate_sync()
dmaengine: hsu: Support dmaengine_terminate_sync()
...
Pull integrity updates from James Morris:
"This adds support for EVM signatures based on larger digests, contains
a new audit record AUDIT_INTEGRITY_POLICY_RULE to differentiate the
IMA policy rules from the IMA-audit messages, addresses two deadlocks
due to either loading or searching for crypto algorithms, and cleans
up the audit messages"
* 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
EVM: fix return value check in evm_write_xattrs()
integrity: prevent deadlock during digsig verification.
evm: Allow non-SHA1 digital signatures
evm: Don't deadlock if a crypto algorithm is unavailable
integrity: silence warning when CONFIG_SECURITYFS is not enabled
ima: Differentiate auditing policy rules from "audit" actions
ima: Do not audit if CONFIG_INTEGRITY_AUDIT is not set
ima: Use audit_log_format() rather than audit_log_string()
ima: Call audit_log_string() rather than logging it untrusted
Make it return -EINVAL if crypto_dh_key_len() is incorrect rather than
overflowing the buffer.
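The shape of the added guard is roughly the following sketch (the exact
function and variable names in the patch may differ):
if (len != crypto_dh_key_len(params))
        return -EINVAL;         /* refuse rather than writing past the buffer */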
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It was forgotten to increase DH_KPP_SECRET_MIN_SIZE to include 'q_size',
causing an out-of-bounds write of 4 bytes in crypto_dh_encode_key(), and
an out-of-bounds read of 4 bytes in crypto_dh_decode_key(). Fix it, and
fix the lengths of the test vectors to match this.
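After the fix, the minimum encoded-secret size plausibly accounts for the
extra q_size field as follows (an assumption based on the description, not a
quote of the patch):
/* struct kpp_secret header + key_size + p_size + q_size + g_size */
#define DH_KPP_SECRET_MIN_SIZE (sizeof(struct kpp_secret) + 4 * sizeof(int))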
Reported-by: syzbot+6d38d558c25b53b8f4ed@syzkaller.appspotmail.com
Fixes: e3fe0ae129 ("crypto: dh - add public key verification test")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Like the skcipher_walk and blkcipher_walk cases:
scatterwalk_done() is only meant to be called after a nonzero number of
bytes have been processed, since scatterwalk_pagedone() will flush the
dcache of the *previous* page. But in the error case of
ablkcipher_walk_done(), e.g. if the input wasn't an integer number of
blocks, scatterwalk_done() was actually called after advancing 0 bytes.
This caused a crash ("BUG: unable to handle kernel paging request")
during '!PageSlab(page)' on architectures like arm and arm64 that define
ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
page-aligned as in that case walk->offset == 0.
Fix it by reorganizing ablkcipher_walk_done() to skip the
scatterwalk_advance() and scatterwalk_done() if an error has occurred.
Reported-by: Liu Chao <liuchao741@huawei.com>
Fixes: bf06099db1 ("crypto: skcipher - Add ablkcipher_walk interfaces")
Cc: <stable@vger.kernel.org> # v2.6.35+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Like the skcipher_walk case:
scatterwalk_done() is only meant to be called after a nonzero number of
bytes have been processed, since scatterwalk_pagedone() will flush the
dcache of the *previous* page. But in the error case of
blkcipher_walk_done(), e.g. if the input wasn't an integer number of
blocks, scatterwalk_done() was actually called after advancing 0 bytes.
This caused a crash ("BUG: unable to handle kernel paging request")
during '!PageSlab(page)' on architectures like arm and arm64 that define
ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
page-aligned as in that case walk->offset == 0.
Fix it by reorganizing blkcipher_walk_done() to skip the
scatterwalk_advance() and scatterwalk_done() if an error has occurred.
This bug was found by syzkaller fuzzing.
Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:
#include <linux/if_alg.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
        struct sockaddr_alg addr = {
                .salg_type = "skcipher",
                .salg_name = "ecb(aes-generic)",
        };
        char buffer[4096] __attribute__((aligned(4096))) = { 0 };
        int fd;

        fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(fd, (void *)&addr, sizeof(addr));
        setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
        fd = accept(fd, NULL, NULL);
        write(fd, buffer, 15);
        read(fd, buffer, 15);
}
Reported-by: Liu Chao <liuchao741@huawei.com>
Fixes: 5cde0af2a9 ("[CRYPTO] cipher: Added block cipher type")
Cc: <stable@vger.kernel.org> # v2.6.19+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
scatterwalk_done() is only meant to be called after a nonzero number of
bytes have been processed, since scatterwalk_pagedone() will flush the
dcache of the *previous* page. But in the error case of
skcipher_walk_done(), e.g. if the input wasn't an integer number of
blocks, scatterwalk_done() was actually called after advancing 0 bytes.
This caused a crash ("BUG: unable to handle kernel paging request")
during '!PageSlab(page)' on architectures like arm and arm64 that define
ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
page-aligned as in that case walk->offset == 0.
Fix it by reorganizing skcipher_walk_done() to skip the
scatterwalk_advance() and scatterwalk_done() if an error has occurred.
This bug was found by syzkaller fuzzing.
Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:
#include <linux/if_alg.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
        struct sockaddr_alg addr = {
                .salg_type = "skcipher",
                .salg_name = "cbc(aes-generic)",
        };
        char buffer[4096] __attribute__((aligned(4096))) = { 0 };
        int fd;

        fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(fd, (void *)&addr, sizeof(addr));
        setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
        fd = accept(fd, NULL, NULL);
        write(fd, buffer, 15);
        read(fd, buffer, 15);
}
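A sketch of the reorganized success/error split described in these three walk
fixes (ablkcipher, blkcipher and skcipher); this shows the shape of the
change, not the exact upstream hunks:
if (likely(err >= 0)) {
        unsigned int n = walk->nbytes - err;    /* bytes actually processed */

        scatterwalk_advance(&walk->in, n);
        scatterwalk_advance(&walk->out, n);
        scatterwalk_done(&walk->in, 0, walk->total - n);
        scatterwalk_done(&walk->out, 1, walk->total - n);
} else {
        /* Error: nothing was processed, so never call scatterwalk_done(),
         * which would flush the dcache of a page that was never reached. */
        goto finish;
}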
Reported-by: Liu Chao <liuchao741@huawei.com>
Fixes: b286d8b1a6 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Setting 'walk->nbytes = walk->total' in skcipher_walk_first() doesn't
make sense because actually walk->nbytes needs to be set to the length
of the first step in the walk, which may be less than walk->total. This
is done by skcipher_walk_next() which is called immediately afterwards.
Also walk->nbytes was already set to 0 in skcipher_walk_skcipher(),
which is a better default value in case it's forgotten to be set later.
Therefore, remove the unnecessary assignment to walk->nbytes.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All callers pass chain=0 to scatterwalk_crypto_chain().
Remove this unneeded parameter.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The ALIGN() macro needs to be passed the alignment, not the alignmask
(which is the alignment minus 1).
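Concretely, the bug class looks like this (illustrative lines, not the exact
hunk):
/* wrong: rounds up to a multiple of (alignment - 1) */
aligned_size = ALIGN(size, alignmask);
/* right: ALIGN() wants the alignment itself */
aligned_size = ALIGN(size, alignmask + 1);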
Fixes: b286d8b1a6 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Avoid RCU stalls in the case of a non-preemptible kernel and lengthy
speed tests by rescheduling when advancing from one block size
to another.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The wait_address argument is always directly derived from the filp
argument, so remove it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Make use of the swap macro and remove unnecessary variable *tmp*.
This makes the code easier to read and maintain.
This code was detected with the help of Coccinelle.
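For reference, the pattern being replaced (swap() comes from <linux/kernel.h>
in this kernel series):
/* before */
tmp = a;
a = b;
b = tmp;

/* after: no temporary variable needed */
swap(a, b);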
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make use of the swap macro and remove unnecessary variable *tmp*.
This makes the code easier to read and maintain.
This code was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix the b value to be compliant with FIPS 186-4 D.1.2.1. This fix is
required to make sure the SP800-56A public key test passes for P-192.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
By adding a zero byte-length for the DH parameter Q value, the public
key verification test is disabled for the given test.
Reported-by: Eric Biggers <ebiggers3@gmail.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CTR DRBG requires two SGLs pointing to input/output buffers for the
CTR AES operation. The SGLs used always have only one entry. Thus, the
SGLs can be initialized at allocation time, preventing a
re-initialization of the SGLs during each call.
The performance is increased by about 1 to 3 percent, depending on the
requested buffer size.
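A hedged sketch of the idea (the drbg_state field and buffer names below are
assumptions):
/* Once, at allocation time: both SGLs have exactly one entry. */
sg_init_table(&drbg->sg_in, 1);
sg_init_table(&drbg->sg_out, 1);

/* Per request: only repoint the single entry at the current buffers. */
sg_set_buf(&drbg->sg_in, inbuf, inlen);
sg_set_buf(&drbg->sg_out, outbuf, outlen);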
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In case memory resources for *base* were allocated, release them
before returning.
Addresses-Coverity-ID: 1471702 ("Resource leak")
Fixes: e3fe0ae129 ("crypto: dh - add public key verification test")
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Reviewed-by: Stephan Müller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto fix from Herbert Xu:
"This fixes an allocation error-path bug in af_alg discovered by
syzkaller"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: af_alg - Initialize sg_num_bytes in error code path
When EVM attempts to appraise a file signed with a crypto algorithm the
kernel doesn't have support for, it will cause the kernel to trigger a
module load. If the EVM policy includes appraisal of kernel modules this
will in turn call back into EVM - since EVM is holding a lock until the
crypto initialisation is complete, this triggers a deadlock. Add a
CRYPTO_NOLOAD flag and skip module loading if it's set, and add that flag
in the EVM case in order to fail gracefully with an error message
instead of deadlocking.
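Usage on the EVM side then looks roughly like the following sketch (only the
CRYPTO_NOLOAD flag itself is introduced by this change):
tfm = crypto_alloc_shash(algo_name, 0, CRYPTO_NOLOAD);
if (IS_ERR(tfm)) {
        /* Algorithm not already loaded: fail gracefully with an error
         * message instead of requesting a module and deadlocking. */
        return PTR_ERR(tfm);
}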
Signed-off-by: Matthew Garrett <mjg59@google.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
The testmgr hash tests were testing init, digest, update and final
methods but not the finup method. Add a test for this one too.
While doing this, make sure we only run the partial tests once with
the digest tests and skip them with the final and finup tests since
they are the same.
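For illustration, the finup semantics being exercised correspond to the
standard synchronous shash calling sequence below (testmgr itself drives the
ahash interface, so this is an analogy rather than the test code):
err = crypto_shash_init(desc);
if (!err)
        err = crypto_shash_update(desc, data, partial_len);
if (!err)
        /* finup() = update() on the remainder + final() in one call */
        err = crypto_shash_finup(desc, data + partial_len,
                                 len - partial_len, digest);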
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some aead algorithms set .cra_flags = CRYPTO_ALG_TYPE_AEAD. But this is
redundant with the C structure type ('struct aead_alg'), and
crypto_register_aead() already sets the type flag automatically,
clearing any type flag that was already there. Apparently the useless
assignment has just been copy+pasted around.
So, remove the useless assignment from all the aead algorithms.
This patch shouldn't change any actual behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Many shash algorithms set .cra_flags = CRYPTO_ALG_TYPE_SHASH. But this
is redundant with the C structure type ('struct shash_alg'), and
crypto_register_shash() already sets the type flag automatically,
clearing any type flag that was already there. Apparently the useless
assignment has just been copy+pasted around.
So, remove the useless assignment from all the shash algorithms.
This patch shouldn't change any actual behavior.
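The per-algorithm change is a one-line deletion of the following form (the
algorithm shown is hypothetical; the same applies to the aead case above):
static struct shash_alg alg = {
        .digestsize     = EXAMPLE_DIGEST_SIZE,
        .base           = {
                .cra_name       = "example",
                .cra_blocksize  = EXAMPLE_BLOCK_SIZE,
                /* .cra_flags = CRYPTO_ALG_TYPE_SHASH,  <-- removed: redundant,
                 * crypto_register_shash() sets the type flag itself */
                .cra_module     = THIS_MODULE,
        },
};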
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
sha512-generic and sha384-generic had a cra_priority of 0, so it wasn't
possible to have a lower priority SHA-512 or SHA-384 implementation, as
is desired for sha512_mb which is only useful under certain workloads
and is otherwise extremely slow. Change them to priority 100, which is
the priority used for many of the other generic algorithms.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>