Saturday, 25 April 2020
VINDICATING LÂM XUNG - PART I. AN OVERVIEW OF CONSPIRACY THEORY - FROM DECONSTRUCTION TO DECLASSIFICATION.
If one of the cables of the Golden Gate Bridge snapped, would the bridge collapse?
Friday, 24 April 2020
Folder Indexing Help - voidtools forum
Folder indexing requires Everything 1.3 or later.
What type of folders can I add to the index?
- Network share or mapped network drive.
- FAT32 and other volumes.
- Any physical folder.
Does folder indexing require administrative privileges?
No, folder indexing does not require administrative privileges or the Everything service.
Why is indexing so slow?
Folder indexing uses the same approach as the Windows search.
This can be a lot slower than NTFS indexing.
Everything can take a couple of minutes to scan a folder and all its subfolders and files.
How do I add a folder to the Everything index?
- In Everything, from the Tools menu, click Options.
- Click the Folders tab.
- Click Add....
- Select a folder to add to the Everything index.
- Click OK.
- Click OK.
How do I add a network share to the Everything index?
- In Everything, from the Tools menu, click Options.
- Click the Folders tab.
- Click Add....
- Select the network share to add to the Everything index, for example:
\\server\share
- Click OK.
- Click OK.
Are changes to indexed folders detected automatically?
No, not all file name changes can be detected.
Changes made remotely are not detected.
Lots of changes in a small amount of time can be missed.
You can specify an update time or update interval to rescan the entire folder for changes that might have been missed.
What happens if the indexed folder is offline or not available?
The folder index will remain unchanged.
However, forcing an index rebuild will show the folder as empty.
Everything will continue to re-scan the folder at the specified update time or update interval and only update the folder when it is online.
Displaying icons and file information of offline folders can take several seconds to time-out.
You can press F5 to refresh this cache when the folder is back online.
Thursday, 23 April 2020
What is the difference between HTTP and FTP? - Quora
In active mode, the data channel is established by the server, while in passive mode, it is the client that establishes the data channel. (The server is passive in this mode, hence the name.)
Active Mode
In active mode, the client lets the server know which port it is listening on for data. The server then establishes the connection and transfers data on this channel.
The problem with this approach is that the client may be behind a firewall, and the firewall may not be configured to accept incoming connections from the server. This is very common, because the end user may not be experienced enough to configure their firewall.
This is where the passive mode helps.
Passive Mode
In passive mode, the client asks the server for a data port. The server does not establish the connection; instead, it tells the client which port it is listening on. The client then establishes the connection, and the server transfers data on this channel.
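The passive-mode handshake above can be sketched in code. The server's 227 reply encodes the data address as six comma-separated numbers (four for the IP address, two for the port), per RFC 959; `parse_pasv_reply` below is an illustrative helper, not part of any library:

```python
import re

def parse_pasv_reply(reply):
    """Extract the host and port from a 227 PASV reply.

    The server encodes the address as six comma-separated numbers:
    four for the IP address and two for the port (high byte, low byte).
    """
    match = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if not match:
        raise ValueError("not a valid PASV reply: " + reply)
    numbers = [int(n) for n in match.groups()]
    host = ".".join(str(n) for n in numbers[:4])
    port = numbers[4] * 256 + numbers[5]  # port = high byte * 256 + low byte
    return host, port

host, port = parse_pasv_reply("227 Entering Passive Mode (192,168,1,2,19,137).")
print(host, port)  # 192.168.1.2 5001
```

The client would then open a second TCP connection to that host and port for the data transfer.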
FTP vs HTTP
This is an attempt to document the primary differences between FTP and HTTP, as this is commonly asked and also a lot of misconceptions (and outright lies) are flying around. If you find any errors, or have additional stuff to add, please email me, file an issue or post a pull-request!
Both protocols are used for uploads and downloads on the internet, for text and for binary, both over TCP/IP. But there are a lot of differences in the details:
Transfer Speed
Possibly the most common question: which is faster for transfers?
Given all the details on this page, what makes FTP faster:
- No added meta-data in the sent files, just the raw binary
- Never chunked encoding "overhead"
What makes HTTP faster:
- reusing existing persistent connections makes for better TCP performance
- pipelining makes asking for multiple files from the same server faster
- (automatic) compression makes less data get sent
- no command/response flow minimizes extra round-trips
Ultimately the net outcome of course differs depending on specific details, but I would say that for single-shot static files, you won't be able to measure a difference. For a single shot small file, you might get it faster with FTP (unless the server is at a long round-trip distance). When getting multiple files, HTTP should be the faster one.
Age
FTP (RFC 959) appeared roughly ten years before HTTP was invented. FTP was the one and only protocol back then. The initial traces of what became RFC 959 can be found as early as 1971.
Upload
Both protocols offer uploads. FTP has an "append" command, where HTTP is more of a "here's data coming now you deal with it" approach.
It is worth noting that WebDAV is a protocol on top of HTTP that provides "filesystem-like" abilities.
ASCII/binary/EBCDIC
FTP has a notion of file format so it can transfer data as ASCII or binary (and more) where HTTP always sends things binary. FTP thus also allows text conversions when files are sent between systems of different sorts:
If the destination uses a different scheme for encoding end-of-line characters, FTP will correct it for the destination. For example, Unix uses only an NL (newline, x'0A') character, while MS Windows uses CR and LF (carriage return and line feed, x'0D0A'). EBCDIC mode specifies that a translation be performed from ASCII to EBCDIC (used on old mainframes).
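The end-of-line conversion described above can be sketched as a small helper; `to_crlf` is a hypothetical name, illustrating what FTP's ASCII mode does when sending Unix-style text to a system that expects CR+LF:

```python
def to_crlf(data):
    """Normalize bare LF line endings to CRLF, as FTP ASCII mode does
    when delivering text to a CR+LF system."""
    # Normalize existing CRLF pairs to LF first so they are not doubled.
    return data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")

print(to_crlf(b"line one\nline two\n"))  # b'line one\r\nline two\r\n'
```

This is also why transferring a binary file in ASCII mode corrupts it: the conversion rewrites any x'0A' byte, whatever it meant in the original file.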
HTTP provides meta-data with files, such as the Content-Type header, which FTP has no equivalent of. Clients can use this meta-data to interpret the contents accordingly.
Headers
Transfers with HTTP always also include a set of headers that send meta data. FTP does not send such headers. When sending small files, the headers can be a significant part of the amount of actual data transferred. HTTP headers contain info about things such as last modified date, character encoding, server name and version and more.
Pipelining
HTTP supports pipelining. It means that a client can ask for the next transfer already before the previous one has ended, which thus allows multiple documents to get sent without a round-trip delay between the documents, and TCP packets are thus optimized for transfer speed.
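A minimal sketch of what pipelining looks like on the wire: two requests written back-to-back on one connection, before any response has arrived (the host and paths are made up):

```python
def get_request(path, host):
    """Build the raw bytes of a simple HTTP/1.1 GET request."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: keep-alive\r\n\r\n").encode("ascii")

# Both requests go out in a single write; responses come back in order.
pipelined = get_request("/a.html", "example.com") + get_request("/b.html", "example.com")
print(pipelined.count(b"GET "))  # 2 requests, one connection, no waiting
```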
Something related, although not similar, is FTP's support for requesting multiple files to get transferred in parallel using the same control connection. That's of course using new TCP connections for each transfer so it'll get different performance metrics. Also, this requires that the server supports doing this sort of operation (ie accepting new commands while there is a transfer in progress), which many servers will not.
FTP Command/Response
FTP involves the client sending commands to which the server responds. A single transfer can involve quite a series of commands. This of course has a negative impact since there's a round-trip delay for each command. HTTP transfers are primarily just one request and one response (for each document). Retrieving a single FTP file can easily get up to 10 round-trips.
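A sketch of a typical command/response exchange for one download. The replies shown are illustrative, but the commands are standard RFC 959 commands, and each command the client sends and waits on costs one round trip:

```python
# Typical exchange to download one file over FTP; each non-None command
# is one round trip on the control connection.
session = [
    ("USER anonymous", "331 Password required"),
    ("PASS guest@",    "230 Logged in"),
    ("SYST",           "215 UNIX Type: L8"),
    ("TYPE I",         "200 Type set to I"),
    ("PASV",           "227 Entering Passive Mode (...)"),
    ("RETR file.bin",  "150 Opening data connection"),
    # ...the file itself flows on the second (data) connection...
    (None,             "226 Transfer complete"),
    ("QUIT",           "221 Goodbye"),
]
round_trips = sum(1 for cmd, _ in session if cmd is not None)
print(round_trips)  # 7 round trips for a single file
```

Compare that with HTTP, where the same download is one request and one response.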
Two Connections
One of the biggest hurdles about FTP in real life is its use of two connections. It uses a first primary connection to send control commands on, and when it sends or receives data, it opens a second TCP stream for that purpose.
Firewalls and NATs
FTP's use of two connections, where the second one uses dynamic port numbers and can go in either direction, gives firewall admins grief, and firewalls really have to "understand" FTP at the application protocol layer to work well.
This also means that if both parties are behind NATs, you cannot use FTP!
Additionally, as NATs often are setup to kill idle connections and the nature of FTP makes the control channel remain quiet during long and slow FTP transfers, we often end up with the control channel getting cut off by the NAT due to idleness.
Active and Passive
FTP opens the second connection in an active or passive mode, which basically says which end that initiates it. It's a client decision to try either way.
Encrypted Control Connections
Since firewalls need to understand FTP to be able to open ports for the secondary connection, there's a huge problem with encrypted FTP (FTP-SSL or FTPS): the control connection is sent encrypted, and the firewall(s) cannot interpret the commands that deal with creating the second connection. Also, the FTPS standard took a very long time to become established, so there exists a range of hybrid versions out in the wild.
Authentications
FTP and HTTP have different sets of authentication methods documented. While both protocols offer basically plain-text user and password by default, there are several commonly used authentication methods for HTTP that don't send the password as plain text, but there aren't as many (non-Kerberos) options available for FTP.
Download
Both protocols offer support for download. Both protocols used to have problems with file sizes larger than 2GB but those are history for modern clients and servers on modern operating systems.
Ranges/resume
Both FTP and HTTP support resumed transfers in both directions, but HTTP supports more advanced byte ranges.
Resumed FTP transfers that start beyond the 2GB position have been known to cause trouble in the past, but should be better these days.
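HTTP resumption is driven by the Range request header, while FTP uses the REST command, which only gives a start offset. A minimal sketch of the HTTP side, with hypothetical helper names (`range_header`, `serve_range`); note that HTTP range ends are inclusive:

```python
def range_header(start, end=None):
    """Build an HTTP Range header to resume from byte `start`
    (open-ended) or to fetch an arbitrary byte range."""
    return f"Range: bytes={start}-{'' if end is None else end}"

def serve_range(data, start, end=None):
    """What a server returns for that range; `end` is inclusive in HTTP."""
    return data[start:] if end is None else data[start:end + 1]

data = b"0123456789"
print(range_header(4))          # Range: bytes=4-
print(serve_range(data, 4))     # b'456789'  (resume after 4 bytes)
print(serve_range(data, 2, 5))  # b'2345'    (arbitrary range, FTP can't do this)
```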
Persistent Connections
For HTTP communication, a client can maintain a single connection to a server and keep using it for any number of transfers. FTP must create a new one for each new data transfer. Repeatedly opening new connections is bad for performance, due to having to redo the handshakes and the TCP slow-start period for every transfer.
HTTP Chunked Encoding
To avoid having to close down the data connection in order to signal the end of a transfer - when the size of the transfer wasn't known when the transfer started, chunked encoding was introduced in HTTP.
During a "chunked encoding" transfer, the sending party sends a stream of [size-of-data][data] blocks over the wire until there is no more data to send and then it sends a zero-size chunk to signal the end of it.
Another obvious benefit of chunked encoding compared to plainly closing the connection (apart from not having to re-open the connection for the next transfer) is the ability to detect premature connection shutdowns.
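The chunk format described above can be sketched as follows; `encode_chunked` is an illustrative helper, not a library function:

```python
def encode_chunked(chunks):
    """Encode an iterable of byte chunks using HTTP/1.1 chunked encoding:
    hex size, CRLF, data, CRLF, terminated by a zero-size chunk."""
    out = b""
    for chunk in chunks:
        out += f"{len(chunk):x}".encode("ascii") + b"\r\n" + chunk + b"\r\n"
    return out + b"0\r\n\r\n"  # zero-size chunk marks a complete transfer

wire = encode_chunked([b"hello ", b"world"])
print(wire)  # b'6\r\nhello \r\n5\r\nworld\r\n0\r\n\r\n'
```

If the connection dies before the receiver sees the zero-size chunk, it knows the transfer was truncated, which a plain connection-close cannot signal.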
Compression
HTTP provides a way for the client and server to negotiate and choose among several compression algorithms. The gzip algorithm being the perhaps most widely used one, with brotli being a recent addition that often compresses data even better.
FTP offers an official "built-in" run-length encoding that compresses the amount of data to send, but not by a great deal on ordinary binary data. Compression has also traditionally been done for FTP using various "hackish" approaches that were never in any FTP spec.
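A quick illustration of why negotiated compression matters, using Python's standard gzip module on repetitive HTML-like data, mirroring what a server does before sending "Content-Encoding: gzip":

```python
import gzip

# Repetitive markup, like most HTML, compresses very well.
original = b"<li>item</li>\n" * 200
compressed = gzip.compress(original)
print(len(original), len(compressed))  # compressed is a small fraction of the original
```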
FXP
FTP supports "third party transfers", often called "FXP". It allows a client to ask a server to send data to a third host, a host that isn't the same as the client. This is often disabled in modern FTP servers though due to the security implications.
IPv6
HTTP and FTP both support ipv6 fine, but the original FTP spec had no such support and still today many FTP servers don't have support for the necessary commands that would enable it. This also goes for the firewalls in between that need to understand FTP.
Name based virtual hosting
Using HTTP 1.1, you can easily host many sites on the same server and they are all differentiated by their names.
In FTP, you cannot do name-based virtual hosting at all, unless the HOST command is implemented both in the server you talk to and in the FTP client you use. It is a recent spec without many implementations.
Dir Listing
One area in which FTP stands out somewhat is that it is a protocol that is directly on file level. It means that FTP has for example commands for listing dir contents of the remote server, while HTTP has no such concept.
However, the FTP spec authors lived in a different age, so the commands for listing directory contents (LIST and NLST) don't have a specified output format, making it a pain to write programs that parse the output. Later specs (RFC 3659) have addressed this with new commands like MLSD, but they are not widely implemented or supported by either servers or clients.
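Unlike LIST output, MLSD output is machine-parseable: semicolon-separated "fact=value" pairs, a space, then the filename (RFC 3659). A minimal sketch of parsing one line, with a made-up example entry:

```python
def parse_mlsd_line(line):
    """Parse one MLSD line (RFC 3659): semicolon-separated facts,
    a space, then the filename."""
    facts_part, _, name = line.partition("; ")
    facts = dict(f.split("=", 1) for f in facts_part.split(";") if "=" in f)
    return name, facts

name, facts = parse_mlsd_line("type=file;size=1024;modify=20200423103000; report.txt")
print(name, facts["size"])  # report.txt 1024
```

With LIST, by contrast, a client has to guess whether it is looking at Unix `ls` output, DOS-style output, or something else entirely.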
Directory listings over HTTP are usually done either by serving HTML showing the dir contents or by the use of WebDAV which is an additional protocol run "over" or in addition to HTTP.
Proxy Support
One of the biggest selling points for HTTP over FTP is its support for proxies, already built-in into the protocol from day 1. The support is so successful and well used that lots of other protocols can be sent over HTTP these days just for its ability to go through proxies.
FTP has always been used over proxies as well, but that was never standardized and was always done in lots of different ad-hoc approaches.
Further
There are further differences, like the HTTP ability to do conditional requests, negotiate content language and much more but those are not big enough to be specified in this document.
Thanks
Feedback and improvements by: Micah Cowan, Joe Touch, Austin Appel, Dennis German, Josh Hillman
| COMPARISON | HTTP | FTP |
|---|---|---|
| Basic | HTTP is used to access websites. | FTP transfers files from one host to another. |
| Connection | HTTP establishes a data connection only. | FTP establishes two connections: one for data and one for control. |
| TCP ports | HTTP uses TCP port 80. | FTP uses TCP ports 20 and 21. |
| URL | If you are using HTTP, http appears in the URL. | If you are using FTP, ftp appears in the URL. |
| Efficiency | HTTP is efficient for transferring smaller files like web pages. | FTP is efficient for transferring larger files. |
| Authentication | HTTP does not require authentication. | FTP requires a password. |
| Data | Content transferred to a device using HTTP is not saved to that device's memory. | A file transferred using FTP is saved in the memory of the host device. |
Definition of HTTP
HTTP stands for Hypertext Transfer Protocol. It is used to access data on the World Wide Web. HTTP works like a combination of FTP and SMTP. It is similar to FTP because, like FTP, it transfers files using the services of TCP. But it uses only one TCP connection, the data connection; no separate control connection is used in HTTP. HTTP uses the services of TCP on port 80.
HTTP is similar to SMTP because the data transferred between client and server looks like SMTP messages. But HTTP messages are not destined for humans to read; they are interpreted by the web server and web browser. Unlike SMTP messages, HTTP messages are delivered immediately instead of being stored and forwarded.
The commands from the client side are sent in a request message to the web server, and the web server sends the requested content in a response message. HTTP itself does not provide any security; to enable security, it is run over the Secure Sockets Layer.
Definition of FTP
FTP stands for File Transfer Protocol. It is used to copy a file from one host to another. When copying a file between hosts, several problems can occur: the communicating hosts may have different file name conventions, different directory structures, or different ways of representing data. FTP overcomes all these problems. FTP is used when two hosts with different configurations want to exchange data.
FTP uses the services of TCP to transfer files between client and server. FTP establishes two connections: one for data transfer on TCP port 20 and one for control information (commands and responses) on TCP port 21. The separate connections for data and commands make FTP more efficient.
The control connection has very simple rules for communication, but the data connection has complex rules due to the variety of data that is transferred. FTP was designed when security was not a big issue. FTP requires a password, but it is sent in plain text and can be intercepted. One can add a Secure Sockets Layer between the FTP application layer and the TCP layer to provide security.
Key Differences Between HTTP and FTP
- The basic difference between HTTP and FTP is that HTTP is used to access different websites on the internet, while FTP is used to transfer files from one host to another.
- HTTP establishes a data connection only, whereas FTP establishes both a data and a control connection.
- HTTP uses TCP port 80, whereas FTP uses TCP ports 20 and 21.
- If you are using HTTP, http appears in the URL of the website; if you are using FTP, ftp appears in your URL.
- HTTP is efficient for transferring smaller files like web pages, whereas FTP is efficient for transferring large files.
- HTTP does not require authentication, whereas FTP uses a password for authentication.
- Web pages or data content transferred to a device using HTTP are not saved in the memory of that device, whereas data delivered to a device using FTP is saved in the memory of that device.
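The URL scheme difference noted above can be checked with Python's standard urllib:

```python
from urllib.parse import urlparse

# The scheme component of the URL tells the client which protocol to speak.
print(urlparse("http://example.com/index.html").scheme)   # http
print(urlparse("ftp://example.com/pub/file.zip").scheme)  # ftp
```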